FrameTicks->Automatic in ClickPane causes constant processor activity

I discovered that a MatrixPlot inside a ClickPane keeps one of my processor cores at about 50% load if the option FrameTicks -> Automatic is present. I have cut the code down to the following (which doesn't really do anything):
a = ConstantArray[0, {2, 11}];
ClickPane[
Dynamic@
MatrixPlot[a, FrameTicks -> Automatic],
# &
]
Switching to FrameTicks -> None stops the core activity.
To study the processor behavior I let a Clock cycle between None and Automatic every 20 seconds (remove the above ClickPane first):
ClickPane[
Dynamic@
MatrixPlot[a, FrameTicks -> ft],
# &
]
Dynamic[ft = {Automatic, None}[[Clock[{1, 2, 1}, 20]]]]
This gives me the following processor activity display:
This is on my Win7-64 / MMA 8.0.1 system.
My questions are:
Is this reproducible on other systems?
Am I doing something wrong or is this a bug?
Why does the bare MatrixPlot[a] (without any FrameTicks setting whatsoever) have these odd-looking frame tick choices?

Confirmed on Windows XP, Mathematica 8.0.
The missing peaks correspond to times when the notebook was hidden by another window. Losing focus does not stop the CPU drain.

It seems that the Automatic setting of the FrameTicks option changes an internal variable that is visible to Dynamic. That variable is apparently (and, I think, erroneously) not localized, which causes a complete re-evaluation of the Dynamic's argument.
A workaround is to add Refresh, which enables the use of TrackedSymbols, so that triggering can be restricted to just the variables we are interested in, in this case the array a and the FrameTicks option value ft:
ClickPane[
Dynamic@
Refresh[
MatrixPlot[a, FrameTicks -> ft],
TrackedSymbols -> {a, ft}],
# &
]
Dynamic[ft = {Automatic, None}[[Clock[{1, 2, 1}, 20]]]]
My processor status stays flat at close to zero now.

Confirmed also on Mac OS X 10.6, Mathematica 8.0.1.
I thought at first that this was something to do with the kernel having to recalculate where the ticks went every time the FrameTicks -> Automatic option was set.
So I tried this and got the same result. Likewise for ArrayPlot.
With[{fta = FrameTicks /.
FullForm[MatrixPlot[a, FrameTicks -> Automatic]][[1, 4]]},
ClickPane[Dynamic@MatrixPlot[a, FrameTicks -> ft], # &]
Dynamic[ft = {fta, None}[[Clock[{1, 2, 1}, 20]]]] ]
But not for this plot; CPU usage barely moved between the two states:
ClickPane[
Dynamic@Plot[Sin[x], {x, 0, 6 Pi}, Frame -> True,
FrameTicks -> ft], # &]
Dynamic[ft = {Automatic, None}[[Clock[{1, 2, 1}, 20]]]]
I can only surmise that there must be some inefficiency in the way frame ticks are displayed on these raster-type plots.
In answer to your third question: the odd tick choice doesn't reproduce on my system.

How to play a SuperCollider file non-interactively from the terminal command line interface (CLI) either on speakers or saving to an output file?

I'm trying to have some fun with SuperCollider, and fun to me means running commands in a shell!
So far I've managed to play to speakers with:
rs.scd
s.waitForBoot({
// Play scale once.
x = Pbind(\degree, Pseq([0, 3, 5, 7], 1), \dur, 0.25);
x.play;
});
with:
sclang rs.scd
and save to a file as mentioned at https://doc.sccode.org/Guides/Non-Realtime-Synthesis.html with:
nrs.scd
x = Pbind(\degree, Pseq([0, 3, 5, 7], 1), \dur, 0.25).asScore(1, timeOffset: 0.001);
x.add([0.0, [\d_recv, SynthDescLib.global[\default].def.asBytes]]);
x.sort;
x.recordNRT(
outputFilePath: "nrt.aiff",
sampleRate: 44100,
headerFormat: "AIFF",
sampleFormat: "int16",
options: ServerOptions.new.numOutputBusChannels_(2),
duration: x.endTime
);
0.exit;
So to achieve my goal I'm missing:
how to quit automatically after rs.scd finishes playing? I could do 1.wait; 0.exit;, but that forces me to hardcode the 1, which is the length in seconds of the four 0.25 s notes being played. That 1 is also hardcoded in nrs.scd, and it would be great to be able to factor it out there as well.
how to make a single file that can either play on speakers, or save to a file, e.g. based on command line options I choose when running it? I think I'll eventually manage by playing with thisProcess.argv and an if, but is there a simpler way?
I was playing with CSound previously, and it was so much simpler to get a similar "hello world" to work there.
Tested on SuperCollider 3.10, Ubuntu 20.04.
For your first question:
In x.recordNRT you can add an action. This function will execute after the score finishes.
...
x.recordNRT(
...
duration: x.endTime,
action: {0.exit}
);
For your second question:
This is an unusual use case. I don't know a better method than argv and an if statement. (See also https://doc.sccode.org/Classes/Main.html#-argv )
Things that you can put before the if include:
Constructing your pattern and creating your synthdefs.
Things that need to go after the if include sending the SynthDefs to the server, since the NRT server is not the same as the local server. See the help file you linked for some warnings about that.
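The structure the answer describes can be sketched as follows. This is an untested sketch: the --nrt flag name is made up, and it assumes thisProcess.argv exposes the extra command-line arguments as Strings (so indexOfEqual is used for the lookup):

```supercollider
(
// sketch: one file that either plays live or renders NRT,
// depending on a hypothetical --nrt command-line flag
var nrt = thisProcess.argv.indexOfEqual("--nrt").notNil;
var pattern = Pbind(\degree, Pseq([0, 3, 5, 7], 1), \dur, 0.25);
if(nrt, {
    // build the score and call recordNRT here, as in nrs.scd
}, {
    s.waitForBoot({ pattern.play });
});
)
```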

In Asciidoc (and Asciidoctor), how do I format console output best?

In my adoc document, I want to show some output logging to the console.
Should I use [source], [source,shell] or nothing before the ----?
----
Solving started: time spent (67), best score (-20init/0hard/0soft), environment mode (REPRODUCIBLE), random (JDK with seed 0).
CH step (0), time spent (128), score (-18init/0hard/0soft), selected move count (15), picked move ([Math(101) {null -> Room A}, Math(101) {null -> MONDAY 08:30}]).
CH step (1), time spent (145), score (-16init/0hard/0soft), selected move count (15), picked move ([Physics(102) {null -> Room A}, Physics(102) {null -> MONDAY 09:30}]).
----
I'd argue it's not really source code (it's output), and I definitely don't want output text that happens to contain shell-like syntax to be syntax-highlighted as shell code (because it's not).
The [source] and plain ---- notations are identical in this case. I would use either (your preference), without the shell type specifier, to get plaintext.
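To make that concrete, here is a sketch of the equivalent spellings (the log line is abbreviated by me). AsciiDoc also accepts an explicit [listing] label, which renders the same way:

```asciidoc
----
Solving started: time spent (67), best score (-20init/0hard/0soft), ...
----

[source]
----
Solving started: time spent (67), best score (-20init/0hard/0soft), ...
----

[listing]
----
Solving started: time spent (67), best score (-20init/0hard/0soft), ...
----
```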

How to slow down Framer animations

I'm looking for a solution to slow down FramerJS animations by a certain amplitude.
In the Velocity animation framework it's possible to do Velocity.mock = 10 to slow down everything by a factor of 10.
Either the docs are lacking in this respect, or this feature doesn't currently exist and should really be implemented.
You can use
Framer.Loop.delta = 1 / 120
to slow down all the animations by a factor of 2. The default value is 1 / 60.
Javier's answer works for most animations, but it doesn't apply to delays. While not ideal, the method I've adopted is to set up a debugging variable and function and pass every time-related value through it:
slowdown = 5
s = (ms) ->
  return ms * slowdown
Then use it like so:
Framer.Defaults.Animation =
  time: s 0.3
…and:
Utils.delay s(0.3), ->
  myLayer.sendToBack()
Setting the slowdown variable to 1 will use your standard timing (anything times 1 is itself).

OpenGL core profile incredible slowdown on OS X

I added a new GL renderer to my engine, which uses the core profile. While it runs fine on Windows and/or NVIDIA cards, it is about 10 times slower on OS X (3 fps instead of 30). The weird thing is that my compatibility-profile renderer runs fine.
I collected some traces with Instruments and the GL profiler:
https://www.dropbox.com/sh/311fg9wu0zrarzm/31CGvUcf2q
It shows that the application spends its time in glDrawRangeElements.
I tried the following things:
use glDrawElements instead (no effect)
flip culling (no effect on speed)
disable some GL_DYNAMIC_DRAW buffers (no effect)
bind index buffer after VAO when drawing (no effect)
converted indices to 4 byte (no effect)
use GL_BGRA textures (no effect)
What I didn't try is to align my vertices to a 16-byte boundary and/or convert the indices to 4 bytes, but seriously, if that were the issue then why the hell does the standard allow it?
I'm creating the context like this:
NSOpenGLPixelFormatAttribute attributes[] =
{
NSOpenGLPFAColorSize, 24,
NSOpenGLPFAAlphaSize, 8,
NSOpenGLPFADepthSize, 24,
NSOpenGLPFAStencilSize, 8,
NSOpenGLPFADoubleBuffer,
NSOpenGLPFAAccelerated,
NSOpenGLPFANoRecovery,
NSOpenGLPFAOpenGLProfile, NSOpenGLProfileVersion3_2Core,
0
};
NSOpenGLPixelFormat* format = [[NSOpenGLPixelFormat alloc] initWithAttributes:attributes];
NSOpenGLContext* context = [[NSOpenGLContext alloc] initWithFormat:format shareContext:nil];
[self.view setOpenGLContext:context];
[context makeCurrentContext];
Tried on the following specs:
radeon 6630M, OS X 10.7.5
radeon 6750M, OS X 10.7.5
geforce GT 330M, OS X 10.8.3
Do you have any ideas what I might do wrong? Again, it works fine with the compatibility profile (not using VAOs though).
UPDATE: reported to Apple.
UPDATE: Apple doesn't give a damn about the problem... Anyway, I created a small test program, which actually runs fine. I compared the call stacks with Instruments and found that when using the engine, glDrawRangeElements makes two calls:
gleDrawArraysOrElements_ExecCore
gleDrawArraysOrElements_Entries_Body
while in the test program it calls only the second. The first call does something like an immediate-mode render (gleFlushPrimitivesTCLFunc, gleRunVertexSubmitterImmediate), so it obviously causes the slowdown.
Finally, I was able to reproduce the slowdown. This is just crazy... It is clearly caused by glBindAttribLocation being called on the "my_Position" attribute. I did some testing:
1 is the default (as returned by glGetAttribLocation)
if I set it to zero, there's no problem
if I set it to 1, the rendering becomes slow
if I set it to any larger number, it is slow again
Obviously I relink the program (check the code). It is not a problem in the implementation; I tested it with "normal" values too.
Test program:
https://www.dropbox.com/s/dgg48g1fwgyc5h0/SLOWDOWN_REPRO.zip
How to repro:
open with XCode
open common/glext.h (don't be disturbed by the name)
modify the GLDECLUSAGE_POSITION constant from 0 to 1
compile and run => slow
changing back to zero => good
I have managed to get myself the same problem in the following circumstance under
OS X Mavericks:
Instanced rendering using array buffers to give each instance its own modelToWorld and inverseNormal matrices; attribute locations are being specified through layout rather than using glGetAttribLocation
leaving one of these array buffers unused in the shader, where location is declared but the attribute isn't actually used for anything in the glsl code
In this case, a call to glDrawElementsInstanced takes up a LOT of CPU time (under normal circumstances, this call uses nearly zero CPU even when drawing several thousand instances).
You can tell that you're getting this specific problem if almost all of the CPU time used within glDrawElementsInstanced is spent in gleDrawArraysOrElements_ExecCore. Making sure that all of the array buffers are actually referenced in your shader code fixes the CPU time back to (nearly) zero.
I suspect that this is one of the situations where leaving a variable out of your main() in GLSL confuses the compiler into deleting all references to that variable, leaving you with a dangling reference to an attribute or uniform.
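As a hypothetical sketch of the fix implied above, make sure every declared per-instance attribute is actually referenced in the shader, even trivially. The names here are illustrative, not from the original code:

```glsl
#version 150
in vec3 inPosition;
in mat4 inModelToWorld;   // per-instance attribute fed from an array buffer

uniform mat4 viewProjection;

void main()
{
    // referencing inModelToWorld keeps the compiler from discarding it,
    // so the attribute never dangles and the slow CPU path is avoided
    gl_Position = viewProjection * inModelToWorld * vec4(inPosition, 1.0);
}
```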

Self-restarting MathKernel - is it possible in Mathematica?

This question comes from the recent question "Correct way to cap Mathematica memory use?"
I wonder, is it possible to programmatically restart MathKernel while keeping the current FrontEnd process connected to the new MathKernel process, and to evaluate some code in the new MathKernel session? I mean a "transparent" restart that allows a user to continue working with the FrontEnd while having a fresh MathKernel process with some code from the previous kernel evaluated (or evaluating) in it.
The motivation for the question is to have a way to automate restarting MathKernel when it takes too much memory, without breaking the computation. In other words, the computation should be continued automatically in the new MathKernel process without interaction with the user (while keeping the user's ability to interact with Mathematica as before). The details of what code should be evaluated in the new kernel are of course specific to each computational task; I am looking for a general solution for automatically continuing the computation.
From a comment by Arnoud Buzing yesterday, on Stack Exchange Mathematica chat, quoting entirely:
In a notebook, if you have multiple cells you can put Quit in a cell by itself and set this option:
SetOptions[$FrontEnd, "ClearEvaluationQueueOnKernelQuit" -> False]
Then if you have a cell above it and below it and select all three and evaluate, the kernel will Quit but the frontend evaluation queue will continue (and restart the kernel for the last cell).
-- Arnoud Buzing
The following approach runs one kernel to open a front-end with its own kernel, which is then closed and reopened, renewing the second kernel.
This file is the MathKernel input, C:\Temp\test4.m
Needs["JLink`"];
$FrontEndLaunchCommand="Mathematica.exe";
UseFrontEnd[
nb = NotebookOpen["C:\\Temp\\run.nb"];
SelectionMove[nb, Next, Cell];
SelectionEvaluate[nb];
];
Pause[8];
CloseFrontEnd[];
Pause[1];
UseFrontEnd[
nb = NotebookOpen["C:\\Temp\\run.nb"];
Do[SelectionMove[nb, Next, Cell],{12}];
SelectionEvaluate[nb];
];
Pause[8];
CloseFrontEnd[];
Print["Completed"]
The demo notebook, C:\Temp\run.nb contains two cells:
x1 = 0;
Module[{},
While[x1 < 1000000,
If[Mod[x1, 100000] == 0, Print["x1=" <> ToString[x1]]]; x1++];
NotebookSave[EvaluationNotebook[]];
NotebookClose[EvaluationNotebook[]]]
Print[x1]
x1 = 0;
Module[{},
While[x1 < 1000000,
If[Mod[x1, 100000] == 0, Print["x1=" <> ToString[x1]]]; x1++];
NotebookSave[EvaluationNotebook[]];
NotebookClose[EvaluationNotebook[]]]
The initial kernel opens a front-end and runs the first cell, then it quits the front-end, reopens it and runs the second cell.
The whole thing can be run either by pasting (in one go) the MathKernel input into a kernel session, or it can be run from a batch file, e.g. C:\Temp\RunTest2.bat
@echo off
setlocal
set PATH=C:\Program Files\Wolfram Research\Mathematica\8.0\;%PATH%
echo Launching MathKernel %TIME%
start MathKernel -noprompt -initfile "C:\Temp\test4.m"
ping localhost -n 30 > nul
echo Terminating MathKernel %TIME%
taskkill /F /FI "IMAGENAME eq MathKernel.exe" > nul
endlocal
It's a little elaborate to set up, and in its current form it depends on knowing how long to wait before closing and restarting the second kernel.
Perhaps the parallel computation machinery could be used for this? Here is a crude set-up that illustrates the idea:
Needs["SubKernels`LocalKernels`"]
doSomeWork[input_] := {$KernelID, Length[input], RandomReal[]}
getTheJobDone[] :=
Module[{subkernel, initsub, resultSoFar = {}}
, initsub[] :=
( subkernel = LaunchKernels[LocalMachine[1]]
; DistributeDefinitions["Global`"]
)
; initsub[]
; While[Length[resultSoFar] < 1000
, DistributeDefinitions[resultSoFar]
; Quiet[ParallelEvaluate[doSomeWork[resultSoFar], subkernel]] /.
{ $Failed :> (Print@"Ouch!"; initsub[])
, r_ :> AppendTo[resultSoFar, r]
}
]
; CloseKernels[subkernel]
; resultSoFar
]
This is an over-elaborate setup to generate a list of 1,000 triples of numbers. getTheJobDone runs a loop that continues until the result list contains the desired number of elements. Each iteration of the loop is evaluated in a subkernel. If the subkernel evaluation fails, the subkernel is relaunched. Otherwise, its return value is added to the result list.
To try this out, evaluate:
getTheJobDone[]
To demonstrate the recovery mechanism, open the Parallel Kernel Status window and kill the subkernel from time to time. getTheJobDone will feel the pain and print Ouch! whenever the subkernel dies. However, the overall job continues and the final result is returned.
The error-handling here is very crude and would likely need to be bolstered in a real application. Also, I have not investigated whether really serious error conditions in the subkernels (like running out of memory) would have an adverse effect on the main kernel. If so, then perhaps subkernels could kill themselves if MemoryInUse[] exceeded a predetermined threshold.
Update - Isolating the Main Kernel From Subkernel Crashes
While playing around with this framework, I discovered that any use of shared variables between the main kernel and subkernel rendered Mathematica unstable should the subkernel crash. This includes the use of DistributeDefinitions[resultSoFar] as shown above, and also explicit shared variables using SetSharedVariable.
To work around this problem, I transmitted the resultSoFar through a file. This eliminated the synchronization between the two kernels with the net result that the main kernel remained blissfully unaware of a subkernel crash. It also had the nice side-effect of retaining the intermediate results in the event of a main kernel crash as well. Of course, it also makes the subkernel calls quite a bit slower. But that might not be a problem if each call to the subkernel performs a significant amount of work.
Here are the revised definitions:
Needs["SubKernels`LocalKernels`"]
doSomeWork[] := {$KernelID, Length[Get[$resultFile]], RandomReal[]}
$resultFile = "/some/place/results.dat";
getTheJobDone[] :=
Module[{subkernel, initsub, resultSoFar = {}}
, initsub[] :=
( subkernel = LaunchKernels[LocalMachine[1]]
; DistributeDefinitions["Global`"]
)
; initsub[]
; While[Length[resultSoFar] < 1000
, Put[resultSoFar, $resultFile]
; Quiet[ParallelEvaluate[doSomeWork[], subkernel]] /.
{ $Failed :> (Print@"Ouch!"; CloseKernels[subkernel]; initsub[])
, r_ :> AppendTo[resultSoFar, r]
}
]
; CloseKernels[subkernel]
; resultSoFar
]
I have a similar requirement when I run a CUDAFunction in a long loop and CUDALink runs out of memory (similar to https://mathematica.stackexchange.com/questions/31412/cudalink-ran-out-of-available-memory). There is no improvement in the memory leak even with the latest Mathematica 10.4. I figured out a workaround and hope you may find it useful. The idea is to use a bash script to call a Mathematica program (run in batch mode) multiple times, passing parameters from the bash script. Here are detailed instructions and a demo (for Windows):
To use bash scripts on Windows you need to install Cygwin (https://cygwin.com/install.html).
Convert your Mathematica notebook to a package (.m) to be able to use it in script mode. If you save your notebook using "Save As...", all the commands will be converted to comments (this was noted by Wolfram Research), so it's better to create a package (File -> New -> Package) and then copy and paste your commands into it.
Write the bash script using the vi editor (instead of Notepad or gedit on Windows) to avoid problems with "\r" line endings (http://www.linuxquestions.org/questions/programming-9/shell-scripts-in-windows-cygwin-607659/).
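If a script does pick up Windows line endings anyway, the carriage returns can be stripped after the fact. A small sketch using GNU sed (dos2unix, where installed, does the same job):

```shell
# write a script with Windows (CRLF) line endings, then strip the \r
printf 'echo hello\r\necho world\r\n' > crlf.sh
sed -i 's/\r$//' crlf.sh
# the cleaned script now runs normally under sh
sh crlf.sh
```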
Here is a demo test.m file:
str=$CommandLine;
len=Length[str];
Do[
If[str[[i]]=="-start",
start=ToExpression[str[[i+1]]];
Pause[start];
Print["Done in ",start," second"];
];
,{i,2,len-1}];
This Mathematica code reads a parameter from the command line and uses it in the calculation.
Here is the bash script (script.sh) that runs test.m several times with different parameters:
#!/bin/bash
for ((i=2;i<10;i+=2))
do
math -script test.m -start $i
done
In the Cygwin terminal, type chmod a+x script.sh to make the script executable, then run it by typing ./script.sh.
You can programmatically terminate the kernel using Exit[]. The front end (notebook) will automatically start a new kernel when you next try to evaluate an expression.
Preserving "some code from the previous kernel" is going to be more difficult. You have to decide what you want to preserve. If you think you want to preserve everything, then there's no point in restarting the kernel. If you know what definitions you want to save, you can use DumpSave to write them to a file before terminating the kernel, and then use << to load that file into the new kernel.
On the other hand, if you know what definitions are taking up too much memory, you can use Unset, Clear, ClearAll, or Remove to remove those definitions. You can also set $HistoryLength to something smaller than Infinity (the default) if that's where your memory is going.
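A minimal sketch of that DumpSave cycle; f and bigData are illustrative names, not from the question:

```mathematica
(* in the old kernel: write the definitions worth keeping to a file *)
f[x_] := x^2;
bigData = RandomReal[1, 10^6];
DumpSave["state.mx", {f, bigData}];
Exit[]

(* first evaluation in the fresh kernel: read them back *)
<< "state.mx"
```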
Sounds like a job for CleanSlate.
<< Utilities`CleanSlate`;
CleanSlate[]
From: http://library.wolfram.com/infocenter/TechNotes/4718/
"CleanSlate, tries to do everything possible to return the kernel to the state it was in when the CleanSlate.m package was initially loaded."
