How do I downsample a control rate variable to a scalar value? - supercollider

In SuperCollider: How do I downsample a control rate variable to a scalar value?
For instance, I have a scalar global called ~delay and a few functions care about that value. They assume it is a scalar. I wanted to put an envelope generator on that variable in order to change it at control rate. Or use MouseX.kr: if I could convert a single value of MouseX.kr to a scalar value I would be happy.
Assume that I cannot refactor the code to allow for a k-rate global and thus I need to sample or downsample a single value from a control rate variable.
I can't do this:
MouseX.kr(1, 4, 1).rand.wait;
But I'd be happy with this:
downSample(MouseX.kr(1, 4, 1)).rand.wait;
Or
~mousex = MouseX.kr(1, 4, 1)
...
downSample(~mousex).rand.wait

This is the classic SuperCollider language-vs-server issue. You want to use MouseX (which represents the server's knowledge of mouse position) in a language-side calculation. ("Why the split? Why can't the language know it using the same object?" - well, imagine the two processes are running on different machines - different mice...)
To get the mouse position in the language, it's better to use one of:
Platform.getMouseCoords // SC up to 3.6
GUI.cursorPosition // SC recent versions
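For example, a minimal language-side sketch (assuming a recent SC where GUI.cursorPosition returns a Point in screen pixels):
(
// read the cursor in the language and rescale it to 1..4,
// roughly what MouseX.kr(1, 4) does, but as a plain number:
var pos = GUI.cursorPosition;
~delay = pos.x.linlin(0, Window.screenBounds.width, 1, 4);
~delay.postln;
)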
If you're sure you want to use server data in the language, then your own answer about sending via a Bus is one way to do it. Recent versions of SuperCollider have methods
Bus.getSynchronous
Bus.setSynchronous
which rely on the new "shared memory interface" between language and server. If the two are on the same machine, then this can be a nice way to do it which avoids the latency of asynchronously requesting the info.
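A rough sketch of that approach (assuming the server is local so the shared memory interface is available):
b = Bus.control(s, 1);
x = { Out.kr(b, MouseX.kr(1, 4, 1)) }.play;  // server writes the mouse value to the bus
// later, e.g. inside a Routine:
~delay = b.getSynchronous;  // returns a plain Float, no callback needed
~delay.rand.wait;           // .wait only works inside a Routine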

After much research I came up with one solution: use a control Bus to grab values. We take a function as input (f) and play it to a bus.
We then read from that bus by calling the get method on the bus, passing a callback that stores the retrieved value.
~mkscalarfun = {
    arg f = { 0 };
    var last = 0.0;
    var mbus = Bus.control(s, 1);
    var pf = f.play(s, mbus);      // play the UGen function onto the control bus
    var scalarf = {
        // Bus.get is asynchronous, so this returns the value received from
        // the previous request; fine for slowly changing controls.
        mbus.get({ |v| last = v; });
        last;
    };
    scalarf; // This is a function
};
// Create a closure that includes the bus
~mousescalarf = ~mkscalarfun.({ MouseX.kr(1, 4, 1); });
~mousescalarf.();
~mousescalarf.().rand.wait;
I am not sure how idiomatic this solution is, whether it is appropriate, or how well it performs.
One problem with this solution is that pf is hidden inside the closure, so you can't stop it later.
One alternative is to use an OO solution where you make a class in your extension directory:
MakeScalarKR {
    var last;
    var mbus;
    var pf;
    var f;

    *new { arg sbase, f;
        ^super.new.init(sbase, f)
    }

    init { arg sbase, myf;
        f = myf;
        last = 0.0;
        mbus = Bus.control(sbase, 1);
        pf = f.play(sbase, mbus);
    }

    v {
        mbus.get({ |x| last = x; });
        ^last
    }

    free {
        pf.free
    }
}
Then you can invoke this class like so:
~mkr = MakeScalarKR(s,{ MouseX.kr(10,400,1) });
~mkr.v()
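For completeness, a hedged usage sketch (the .wait calls have to run inside a Routine, and free stops the hidden synth):
(
Routine({
    ~mkr = MakeScalarKR(s, { MouseX.kr(10, 400, 1) });
    1.wait;                // give the synth time to start writing to the bus
    5.do {
        ~mkr.v().postln;   // a plain scalar snapshot of the mouse position
        0.5.wait;
    };
    ~mkr.free;             // stops the playing function
}).play;
)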

Related

Cypress: global variables in callbacks

I am trying to set up a new Cypress framework and I hit a point where I need help.
The scenario I am trying to work out: a page that calls the same endpoint after every change, and I use an interceptor to wait for the call. As we all know, you can't use the same intercept alias for the same request multiple times, so I used a trick here: a dynamically named alias, which works.
var counters = {}

function registerIntercept(method, url, name) {
    counters[name] = 1;
    cy.intercept(method, url, (req) => {
        var currentCounter = counters[name]++;
        cy.wrap(counters).as(counters)
        req.alias = name + (currentCounter);
    })
}

function waitForCall(call) {
    waitForCall_(call + (counters[call.substr(1)]));
    // HERE counters[any_previously_added_key] is always 1, even though the
    // counters entry in registerIntercept is bigger
    // I suspect counters here is not using the same counters value as registerIntercept
}

function waitForCall_(call) {
    cy.wait(call)
    cy.wait(100)
}
It is supposed to be used as waitForCall("#callalias"), and the alias will be resolved to #callalias1, #callalias2 and so on.
The problem is that the global counters is working in registerIntercept, but I can't get its values from waitForCall. It will always retrieve the value 1 for a specific key, while the same key in registerIntercept is already at a bigger number.
This code works if you use it as waitForCall_("#callalias4") to wait for the 4th request, for example. Each intercept will get an alias ending with an incremented number. But I don't want to keep track of how many calls were made; I want the code to retrieve that from counters and build the wait.
Any idea why counters in waitForCall is not having the same values for its keys as it has in registerIntercept?

Supercollider ERROR: can't set a control to a UGen

I am trying to change the volume using Line.kr but I get this error: ERROR: can't set a control to a UGen
Here is the code:
a = {arg freq=440, vol=0; SinOsc.ar(freq)*vol}.play
a.set(\vol,Line.kr(0,1.0,3))
Any ideas?
This is actually such a basic issue/topic that a more elaborate answer is perhaps warranted. Basically, if you need/want "total flexibility" in what that \vol envelope is, you have to read it from a (server-side) bus or use one of the many client-side wrapper tricks that hide that (bus) plumbing under some syntactic sugar. A typical example of the latter would be JITLib; an example using it:
a = Ndef(\a, {arg freq=440, vol=0; SinOsc.ar(freq)*vol}).play;
a.set(\vol, Ndef(\v, { Line.kr(0,1.0,3) }))
If you now do a.edit you'll see something like a node proxy editor window, with the \vol control mapped to the Ndef(\v) proxy.
Without using that JITLib sugar, you'd have to allocate and map a bus yourself, such as in:
a = {arg freq=440, vol=0; SinOsc.ar(freq)*vol}.play;
b = Bus.control(s, 1); // "s" is the default-bound server variable
a.map(\vol, b);
c = { Out.kr(b, Line.kr(0, 1.0, 3)) }.play;
With JITLib you can just use set everywhere, as it has "smarts" to detect the argument type, but with basic SC you need to distinguish between mapping and setting. You can, however, also set a control to a "c"-prefixed bus-number string to achieve the same effect (map basically does that for you), i.e. the penultimate line above can be a.set(\vol, b.asMap). Just b.asMap evaluates to something like "c1", meaning control bus 1. (Audio busses use "a" prefixes instead, as you might expect.)
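To make that concrete, a tiny sketch building on the bus example above:
b.asMap.postln;        // prints something like "c1" (control bus 1)
a.set(\vol, b.asMap);  // same effect as a.map(\vol, b)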
This bit may be somewhat more confusing, but keeping in mind that a and ~a are different kinds of variables (more or less interpreter variable vs. environment variable), the Ndef keys (the first Ndef arguments) can be used "directly" as the "shortcut" ~variables provided by the ProxySpace, as in
p = ProxySpace.push(s);
~a = {arg freq=440, vol=0; SinOsc.ar(freq)*vol};
~a.play; // play the NodeProxy, not the Function (Synth) directly
~a.set(\vol, ~v); // ~v is a NodeProxy.nil(localhost, nil) due to ProxySpace
~v = { Line.kr(0,1.0,3) };
Under the hood this last example practically auto-issues Ndefs, i.e. the second line does the same as Ndef(\a, ...), so you don't have to type Ndefs explicitly anymore. The way this works is that ProxySpace replaces the currentEnvironment (where ~variables live) with one in which put (triggered by assignments to ~variables) creates or modifies Ndefs, and at accesses them; e.g. if you evaluate currentEnvironment now it shows something like
ProxySpace ( ~a - ar(1) ~v - kr(1) )
(To get back to your prior environment issue a p.pop now.)
You can't use a UGen to set an argument of a SynthDef, but you can expose the arguments of Line.kr as controls and set those:
a = { arg freq = 440, atk = 0, sus = 1, rel = 3; SinOsc.ar(freq) * Line.kr(atk, sus, rel) }.play
a.set(\atk, 0, \sus, 1, \rel, 0)
Please note that with Line.kr you can't restart the envelope.
For a more specific control please see the EnvGen UGen:
http://doc.sccode.org/Classes/EnvGen.html
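A minimal sketch along those lines (not from the linked docs; it assumes a trigger-style control, here named t_trig, so the envelope can be re-fired with set):
(
a = { arg freq = 440, t_trig = 0;
    var vol = EnvGen.kr(Env([0, 1, 0], [0.01, 3]), t_trig);
    SinOsc.ar(freq) * vol
}.play;
)
a.set(\t_trig, 1);  // restarts the envelope each time this line is evaluated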

How can I reuse code between Javascript macros and minimize work done within the macros?

I currently have two macros that are part of a (very limited-audience) plugin I'm developing, that both look basically like:
(function(){
    exports.name = "name";
    exports.params = [
        {name: "value"}
    ];
    function get(tiddler) {
        // return some contents of some tiddler fields according to some rule
    }
    function parse(data) {
        // convert string to some kind of useful object
    }
    function logic(x, y) {
        // determine whether the two objects correspond in some way
    };
    function format(data, title) {
        // produce WikiText for a link with some additional decoration
    };
    exports.run = function(value) {
        value = parse(value);
        var result = [];
        this.wiki.each(function(tiddler, title) {
            var data = get(tiddler);
            if (data !== undefined && logic(value, parse(data))) {
                result.push(format(data, title));
            }
        });
        return result.join(" | ");
    };
})();
So they're already fairly neatly factored when considered individually; the problem is that only the core logic is really different between the two macros. How can I share the functions get, logic and format between the macros? I tried just putting them in a separate tiddler, but that doesn't work; when the macros run, TW raises an error claiming that the functions are "not defined". Wrapping each function as its own javascript macro in a separate tiddler, e.g.
(function(){
    exports.name = "get";
    exports.params = [
        {name: "tiddler"}
    ];
    exports.run = function(tiddler) {
        // return some contents of some tiddler fields according to some rule
    }
})();
also didn't help.
I'd also like to set this up to be more modular/flexible, by turning the main get/parse/logic/format process into a custom filter, then letting a normal filter expression take care of the iteration and using e.g. a widget or macro to display the items. How exactly do I set this up? The documentation tells me
If the provided filter operators are not enough, a developer can add
new filters by adding a module with the filteroperator type
but I can't find any documentation of the API for this, nor any examples.
How can I share the functions get, logic and format between the macros?
You can use the CommonJS-standard require('<tiddler title>') syntax to access the exports of another tiddler. The target tiddler should be set up as a JS module (i.e., the type field set to application/javascript and the module-type field set to library). You can see an example here:
https://github.com/Jermolene/TiddlyWiki5/blob/master/core/modules/widgets/count.js#L15
I'd also like to set this up to be more modular/flexible, by turning the main get/parse/logic/format process into a custom filter, then letting a normal filter expression take care of the iteration and using e.g. a widget or macro to display the items. How exactly do I set this up?
The API for writing filter operators isn't currently documented, but there are many examples to look at:
https://github.com/Jermolene/TiddlyWiki5/tree/master/core/modules/filters

RxJS: How to create an operator on streams: a scan operator where the accumulator state can be reset through an observable

I need to create a new instance operator on streams with the following characteristics
Signature
Rx.Observable.prototype.scan_with_reset(accumulator, seed$)
where :
Arguments
accumulator (Function): An accumulator function to be invoked on each element.
seed$ (Observable) : An observable whose values will be used to restart the accumulator function. The accumulator function has the following signature function accumulator_fn(accumulator_state, source_value). I want the value in seed$ to reset accumulator_state to the seed value and emit the seed value.
Returns
(Observable) : An observable sequence which results from the comonadic bind operation (whatever that means, I am copying Rxjs documentation here). Vs. the normal scan operator, what happens here is that when the accumulator function is 'restarted' from the seed value emitted by the seed$ observable, that seed value is emitted, and the next value to be emitted by the scan_with_reset operator will be accumulator_fn(seed, source_value)
Example of use :
var seed$ = Rx.Observable.fromEvent(document, 'keydown')
    .map(function (ev) { return ev.keyCode })
    .startWith(0);
var result$ = counter$.scan_with_reset(seed$,
    function accumulator_fn(acc, counter) { return acc + counter });
The following diagrams should explain more in details the expected results:
seed : 0---------13--------27------------
counter : -1--5--2----6---2-----4---1---3---
result : 0-1--6--8-13-19--21-27-31--32--35-
My initial attempt was to have seed$ modify a variable in the scope of accumulator_fn, so I could detect changes in the function itself.
I pursue two goals here:
have an implementation which is as stateless and closure-less as possible
understand the mechanics behind defining one's own operators on streams, of which this would hopefully be a simple example
I had a look at the scan source code: https://github.com/Reactive-Extensions/RxJS/blob/master/src/core/linq/observable/scan.js
but I am not sure where to go from there.
Does anybody have any experience creating RxJS stream operators? What are the conventions to follow and traps to avoid? Are there any examples of custom-made operators that I could look at? How would you go about implementing this particular one?
[UPDATE] : Some test code for the accepted answer
var seed$ = Rx.Observable.fromEvent(document, 'keydown')
    .map(function (ev) { return ev.keyCode })
    .startWith(0);
var counter$ = Rx.Observable.fromEvent(document, 'mousemove')
    .map(function (ev) { return 1 });
var result$ = counter$.scanWithReset(seed$,
    function accumulator_fn(acc, counter) { return acc + counter });
var s = function (x) { console.log("value: ", x) };
var disposable = result$.subscribe(s)
Moving the mouse should show a value increase by 1, and pressing a key should restart the counter with the value of the key pressed.
As a general rule when creating operators, it is easiest to use the Observable.create method, which essentially defines how your Observable should behave when it is subscribed to, or to just wrap an existing set of operators, à la share.
When you get more into performance there are some other considerations (Observable.create is not terribly efficient at scale) and you could look into creating a custom Observable like map.
For your case I would recommend the former for right now. I would think of your problem really as several independent streams that we would like to flatten into a single stream. Each new stream will start when reset is triggered. This is really sounding an awful lot like flatMap to me:
Rx.Observable.prototype.scanWithReset = function ($reset, accum, seed) {
    var source = this;
    //Creates a new Observable
    return Rx.Observable.create(function (observer) {
        //We will be reusing this source so we want to make sure it is shared
        var p = source.publish();
        var r = $reset
            //Make sure the seed is added first
            .startWith(seed)
            //This will switch to a new sequence with the associated value
            //every time $reset fires
            .flatMapLatest(function (resetValue) {
                //Perform the scan with the latest value
                return source.scan(accum, resetValue);
            });
        //Make sure every thing gets cleaned up
        return new Rx.CompositeDisposable(
            r.subscribe(observer),
            //We are ready to start receiving from our source
            p.connect());
    });
}

Variable capture by closures in Swift and inout parameters

I noticed that when a variable is captured by a closure in Swift, the closure can actually modify the value. This seems crazy to me and an excellent way of getting horrendous bugs, especially when the same var is captured by several closures.
var capture = "Hello captured"

func g() {
    // this shouldn't be possible!
    capture = capture + "!"
}

g()
capture
On the other hand, there are inout parameters, which allow a function or closure to modify the variables passed to it.
What's the need for inout, when even captured variables can already be modified with impunity?!
Just trying to understand the design decisions behind this...
Variables from an outer scope that are captured aren't parameters to the routine, hence their mutability is inherited from context. By default, actual parameters to a routine are constant (let) and hence can't be modified locally (and their value isn't returned).
Also note that your example isn't really capturing capture since it's a global variable.
var global = "Global"

func function(nonmutable: Int, var mutable: Int, inout returnable: Int) -> Void {
    // global can be modified here because it's a global (not captured!)
    global = "Global 2"
    // nonmutable can't be modified
    // nonmutable = 3
    // mutable can be modified, but its caller won't see the change
    mutable = 4
    // returnable can be modified, and its caller sees the change
    returnable = 5
}

var nonmutable = 1
var mutable = 2
var output = 3
function(nonmutable, mutable, &output)
println("nonmutable = \(nonmutable)")
println("mutable = \(mutable)")
println("output = \(output)")
Also, as you can see, the inout parameter is passed differently so that it's obvious that on return, the value may be different.
David's answer is totally correct, but I thought I'd give an example of how capture actually works as well:
func captureMe() -> (String) -> () {
    // v~~~ This will get 'captured' by the closure that is returned:
    var capturedString = "captured"
    return {
        // The closure that is returned will print the old value,
        // assign a new value to 'capturedString', and then
        // print the new value as well:
        println("Old value: \(capturedString)")
        capturedString = $0
        println("New value: \(capturedString)")
    }
}
let test1 = captureMe() // Output: Old value: captured
println(test1("altered")) // New value: altered
// But each new time that 'captureMe()' is called, a new instance
// of 'capturedString' is created with the same initial value:
let test2 = captureMe() // Output: Old value: captured
println(test2("altered again...")) // New value: altered again...
// Old value will always start out as "captured" for every
// new function that captureMe() returns.
The upshot of that is that you don't have to worry about the closure altering the captured value - yes, it can alter it, but only for that particular instance of the returned closure. All other instances of the returned closure will get their own, independent copy of the captured value that they, and only they, can alter.
Here are a couple of use cases for closures capturing variables outside their local context, that may help see why this feature is useful:
Suppose you want to filter duplicates out of an array. There’s a filter function that takes a filtering predicate and returns a new array of only entries matching that predicate. But how to pass the state of which entries have already been seen and are thus duplicates? You’d need the predicate to keep state between calls – and you can do this by having the predicate capture a variable that holds that state:
func removeDupes<T: Hashable>(source: [T]) -> [T] {
    // “seen” is a dictionary used to track duplicates
    var seen: [T: Bool] = [:]
    return source.filter { // brace marks the start of a closure expression
        // the closure captures the dictionary and updates it
        seen.updateValue(true, forKey: $0) == nil
    }
}
// prints [1,2,3,4]
removeDupes([1,2,3,1,1,2,4])
It’s true that you could replicate this functionality with a filter function that also took an inout argument – but it would be hard to write something as generic yet flexible as what closures make possible. (You could do this kind of filter with reduce instead of filter, since reduce passes state from call to call – but the filter version is probably clearer.)
There is a GeneratorOf struct in the standard library that makes it very easy to whip up sequence generators of various kinds. You initialize it with a closure, and that closure can capture variables to use for the state of the generator.
Suppose you want a generator that serves up a random ascending sequence of m numbers from a range 0 to n. Here’s how to do that with GeneratorOf:
import Darwin

func randomGeneratorOf(#n: Int, #from: Int) -> GeneratorOf<Int> {
    // state variables to capture in the closure
    var select = UInt32(n)
    var remaining = UInt32(from)
    var i = 0
    return GeneratorOf {
        while i < from {
            if arc4random_uniform(remaining) < select {
                --select
                --remaining
                return i++
            }
            else {
                --remaining
                ++i
            }
        }
        // returning nil marks the end of the sequence
        return nil
    }
}
var g = randomGeneratorOf(n: 5, from: 20)
// prints 5 random numbers in 0..<20
println(",".join(map(g,toString)))
Again, it’s possible to do this kind of thing without closures – in languages without them, you’d probably have a generator protocol/interface and create an object that held state and had a method that served up values. But closure expressions allow a flexible way to do this with minimal boiler plate.
A closure being able to modify the captured variable in the outer scope is pretty common across languages. This is the default behavior in C#, JavaScript, Perl, PHP, Ruby, Common Lisp, Scheme, Smalltalk, and many others. This is also the behavior in Objective-C if the outer variable is __block, in Python 3 if the outer variable is nonlocal, and in C++ if the outer variable is captured with &.
