SuperCollider patterns library: how to get a reference to the synths' nodeIDs?

Patterns library question:
How can I get a reference to the Synth that is created by a Pbind?
For instance,
Pbind(
    \instrument, \myCustomSynthDef,
    \midinote, Pseq([60, 62, 64], inf),
    \dur, 0.5
).play
gets me a repeating do-re-mi sequence. If I'd like to change some modulation parameter on the synth that plays 're', how can I get that synth's nodeID into a variable?

To control the "re" synth, you would normally put some extra parameters into the Pbind and then simply use them in the synth, e.g. add
\craziness, Pseq([0, 100, 0], inf)
to your Pbind, and add something in your SynthDef to use it, e.g. as sketched below.
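For example, a minimal sketch (the SynthDef name \myCustomSynthDef and the mapping of craziness to a vibrato rate are just assumptions for illustration):

SynthDef(\myCustomSynthDef, { |out = 0, freq = 440, amp = 0.1, gate = 1, craziness = 0|
    // 'craziness' drives a vibrato rate here; use it however you like
    var mod = SinOsc.kr(craziness, 0, 0.02, 1); // roughly ±2% pitch wobble
    var env = EnvGen.kr(Env.adsr, gate, doneAction: 2);
    Out.ar(out, (SinOsc.ar(freq * mod) * env * amp) ! 2);
}).add;

Pbind(
    \instrument, \myCustomSynthDef,
    \midinote, Pseq([60, 62, 64], inf),
    \dur, 0.5,
    \craziness, Pseq([0, 100, 0], inf) // lines up so only "re" gets 100
).play;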
If you really, really want to know the nodeID (bleh, not pleasant), then don't use Pattern.play. You could instead iterate the pattern manually (e.g. using .next) and call .play on each Event in that iteration. When you call an Event's .play, it returns an Event that has the node ID inside, stored in the \id key.
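A minimal sketch of that manual iteration (reusing the hypothetical \myCustomSynthDef from above; per the answer, .play fills in the node ID under \id):

(
var stream = Pbind(
    \instrument, \myCustomSynthDef,
    \midinote, Pseq([60, 62, 64], 1),
    \dur, 0.5
).asStream;
var nodeIDs = List.new;

Routine {
    var event;
    while {
        event = stream.next(Event.default);
        event.notNil;
    } {
        event.play;                // plays the note and returns the Event
        nodeIDs.add(event[\id]);   // the server node ID(s) end up under \id
        event.delta.wait;          // keep the sequence's timing
    };
    ("node IDs:" + nodeIDs).postln;
}.play;
)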

Related

How to get the matched key from ngx-translate?

I have a directive that is sending a whole array of possible "fallback" keys to the Translate pipe's transform method (as the second param, "args"), and ngx-translate somehow settles on the one that actually exists. I know the Translate pipe has a lastKey property, but that turns out not to have the right value if a fallback is chosen.
Is there a way to get what path it actually translated?

Transformer-XL: Input and labels for Language Modeling

I'm trying to finetune the pretrained Transformer-XL model transfo-xl-wt103 for a language modeling task. Therefore, I use the model class TransfoXLLMHeadModel.
To iterate over my dataset I use the LMOrderedIterator from the file tokenization_transfo_xl.py which yields a tensor with the data and its target for each batch (and the sequence length).
Let's assume the following data with batch_size = 1 and bptt = 8:
data = tensor([[1,2,3,4,5,6,7,8]])
target = tensor([[2,3,4,5,6,7,8,9]])
mems # from the previous output
My question is: I currently pass this data into the model like this:
output = model(input_ids=data, labels=target, mems=mems)
Is this correct?
I am wondering because the documentation says for the labels parameter:
labels (torch.LongTensor of shape (batch_size, sequence_length), optional, defaults to None):
Labels for language modeling.
Note that the labels are shifted inside the model, i.e. you can set lm_labels = input_ids
So what is it about the parameter lm_labels? I only see labels defined in the forward method.
And when the labels "are shifted" inside the model, does this mean I have to pass data twice (i.e. also in place of the targets) because it is shifted inside? But how does the model then know the next token to predict?
I also read through this bug and the fix in this pull request, but I don't quite understand how to treat the model now (before vs. after the fix).
Thanks in advance for some help!
Edit: Link to issue on Github
That does sound like a typo from another model's convention. You do have to pass data twice, once to input_ids and once to labels (in your case, [1, ..., 8] for both). The model will then attempt to predict [2, ..., 8] from [1, ..., 7]. I am not sure adding something at the beginning of the target tensor would work, as that would probably cause size mismatches later down the line.
Passing the same data twice is the default way to do this in transformers; before the aforementioned PR, TransfoXL did not shift the labels internally and you had to shift them yourself. The PR changed it to be consistent with the rest of the library and with the documentation, where you pass the same data twice.
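A minimal sketch of the post-fix call (tensor taken from the question; treat the loss reduction as an assumption, since TransfoXL returns per-token losses rather than a single scalar):

import torch
from transformers import TransfoXLLMHeadModel

model = TransfoXLLMHeadModel.from_pretrained("transfo-xl-wt103")

# batch_size = 1, bptt = 8, as in the question
data = torch.tensor([[1, 2, 3, 4, 5, 6, 7, 8]])

# Pass the same tensor as both input_ids and labels; after the fix the
# model shifts the labels internally, so it learns to predict
# tokens 2..8 from tokens 1..7.
outputs = model(input_ids=data, labels=data)

# The losses come back per token (roughly shape (batch, seq_len - 1)),
# so reduce them to a scalar before backprop.
loss = outputs[0].mean()
loss.backward()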

Is there a drawback in using rxjs for readonly collection manipulation

I need to do a Min and Max operation on an array coming from the server side.
I am new to the rxjs extensions, but that library is really meant for observing changes on a collection; in my case it is just a ONE-time calculation on a collection that is not changed again until I do a server-side refresh of the data.
I just want to use the right tool for the job, so my question is: is it correct to use rxjs here, or is that like shooting sparrows with cannons?
Or should I rather use a library like https://github.com/ENikS/LINQ
to get the Min/Max value of a collection?
There is a LINQ implementation, IxJS, that is developed and maintained by the same team that develops RxJS. This might be the right tool for you.
However, you could go with RxJS as well: when using Rx.Observable.from([1, 2, ...]), the execution is synchronous on subscription.
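For example, a minimal sketch in the RxJS 4 style (assuming the global Rx namespace and its min()/max() aggregate operators):

const values = [2, 4, 23, 1, 0, 34, 56, 2, 3, 45, 98, 6, 3];
let min, max;

// For an array source the whole chain runs synchronously on subscribe,
// so min and max are already set on the next line.
Rx.Observable.from(values).min().subscribe(v => min = v);
Rx.Observable.from(values).max().subscribe(v => max = v);

console.log(min, max); // 0 98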
I would use IxJS however:
// An array of values.. (just creating some random ones here)
const values = [2, 4, 23, 1, 0, 34, 56, 2, 3, 45, 98, 6, 3];
// Create an enumerable from the array
const valEnum = Ix.Enumerable.fromArray(values);
const min = valEnum.min();
const max = valEnum.max();
Working example on jsfiddle.
https://github.com/ENikS/LINQ uses all the latest language features and is theoretically much faster than IxJS; the last edit on IxJS was three years ago. ECMAScript 6 (ECMA-262, 6th edition) introduced a few very important advancements and speed improvements.
It also has better compliance with the standard LINQ API and can operate on any collection implementing the iterable protocol, including strings, maps, typed arrays, etc.; IxJS can only query array types.
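A minimal sketch with ENikS/LINQ (published on npm as linq-es2015; the capitalized, C#-style operator names are that library's convention, but treat the exact import as an assumption):

import { asEnumerable } from "linq-es2015";

const values = [2, 4, 23, 1, 0, 34, 56, 2, 3, 45, 98, 6, 3];

// asEnumerable() accepts any iterable, not just arrays.
const min = asEnumerable(values).Min(); // 0
const max = asEnumerable(values).Max(); // 98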

ODBC: SQLSetConnectAttr actual attribute constant name

I have encountered a SQLSetConnectAttrW call with an attribute constant equal to either 0 or 1, e.g. SQLSetConnectAttrW(0x1231231, 0, 0, -6), and I cannot tell which actual SQL_ATTR_* define name it corresponds to so that I can refer to it further. I tried to look through the ODBC header files, but had no success finding what it could be. So my question is: what are these constants' names?
PS: ADO internally makes this sort of call and I have to figure out what it is being made for.
Best regards, Alexander Chernyaev.
If you are seeing SQLSetConnectAttr(0xNNNNNNNN, 0, 0, -6), then the first argument is a connection handle (a pointer), the second is the attribute to set (I'm not aware of an attribute with value 0), the third is irrelevant here, and the fourth is SQL_IS_INTEGER, implying it is a numeric attribute. Are you sure it is attempting to set attribute 0? Where did you get that information from?
These two attributes are SQL_ATTR_QUERY_TIMEOUT (0) and SQL_ATTR_MAX_ROWS (1); although they are statement attributes, it's OK to pass them to a connection handle, as @bohica stated before.
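For reference, a minimal sketch of where those values come from in the ODBC headers (header contents paraphrased in comments; check your own sql.h/sqlext.h):

#include <sql.h>
#include <sqlext.h>

/* From the headers:
 *   #define SQL_QUERY_TIMEOUT        0
 *   #define SQL_MAX_ROWS             1
 *   #define SQL_ATTR_QUERY_TIMEOUT   SQL_QUERY_TIMEOUT
 *   #define SQL_ATTR_MAX_ROWS        SQL_MAX_ROWS
 *   #define SQL_IS_INTEGER           (-6)
 */
void example(SQLHDBC hdbc)
{
    /* attribute 0 == SQL_ATTR_QUERY_TIMEOUT; value 0 = no timeout */
    SQLSetConnectAttr(hdbc, SQL_ATTR_QUERY_TIMEOUT,
                      (SQLPOINTER)0, SQL_IS_INTEGER);

    /* attribute 1 == SQL_ATTR_MAX_ROWS; value 0 = no limit */
    SQLSetConnectAttr(hdbc, SQL_ATTR_MAX_ROWS,
                      (SQLPOINTER)0, SQL_IS_INTEGER);
}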

Problem converting a Matrix to Data Frame in R (R thinks all numeric types are factors)

I am passing data from C# to R over a COM interface. When the data arrives in R it is housed in a 'Matrix'. Some of the functions that I use require that the data be inside a 'DataFrame' instead. I convert the data structure using
newDataFrame <- as.data.frame(oldMatrix)
The table of data reaches R just fine; once I make the conversion to the data frame, however, it assumes all of my numeric data are factors!
So it turns: {34, 46, 90, 54, 69, 54} into {1, 2, 3, 4, 5, 4}
My data table DOES have factors in it though, so I just can't force the whole thing to be numeric. Is there any way around this? Note: I can't export the data as a CSV onto the filesystem and read it into R manually.
On a side note, the function I am using that requires a DataFrame is the 'Hmisc' package using
hist.data.frame(dataFrame)
this produces a frequency histogram for every column of data in the data frame and arranges them all in a grid pattern (quite nifty)!
Thanks!
-Dave
I think you have mis-diagnosed the problem: all columns in a matrix must be of the same type, so this is likely where the problem arises, not in the conversion to a data frame.
I've had this problem before. You need to set stringsAsFactors = FALSE when you read (or convert) the data.
Then you can convert individual variables/columns to whatever types you want (e.g. with as.numeric() and the like) without worrying about how the numbers are treated, as sketched below.
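A minimal sketch of that workflow (the example matrix is made up; the key points are stringsAsFactors = FALSE and the as.character() round-trip for columns that have already become factors):

# A matrix holds a single type, so mixing numbers and strings over the
# COM interface makes everything character.
oldMatrix <- matrix(c("34", "46", "90", "low", "high", "low"), ncol = 2)

# Keep strings as strings during the conversion...
newDataFrame <- as.data.frame(oldMatrix, stringsAsFactors = FALSE)

# ...then convert each column to its intended type.
newDataFrame[[1]] <- as.numeric(newDataFrame[[1]])  # numeric column
newDataFrame[[2]] <- as.factor(newDataFrame[[2]])   # genuine factor

# If a column is already a factor, go through character first;
# as.numeric() on a factor returns the level codes (1, 2, 3, ...),
# which is exactly the problem described above:
# as.numeric(as.character(factorColumn))
str(newDataFrame)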
