I'd like to create a Base16 encoder and decoder for Lion's new Security.framework to complement kSecBase32Encoding and kSecBase64Encoding. Apple's documentation shows how to write a custom transform (Caesar cipher) using SecTransformRegister. As far as I can tell, custom transforms registered this way have to operate symmetrically on the data and can't be used to encode and decode data differently. Does anyone know if writing a custom encoder/decoder is possible, and if so, how?
I don't see a way to tie custom encoders into SecEncodeTransformCreate(), which is what kSecBase32Encoding and the others are based on. But it's easy to create a transform that accepts an "encode" bool and uses it to decide whether to encode or decode. In the CaesarTransform example, they attach an attribute called "key" with SecTransformSetAttribute(). You'd do the same thing, but with a bool "encode".
And of course you could just create an encoding transform and a decoding transform.
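Here is a minimal caller-side sketch of how that could look. It assumes you have already registered a custom transform named "com.example.Base16" following Apple's CaesarTransform sample, and that its implementation reads a boolean "Encode" attribute (a name invented for this sketch, not an SDK constant) to pick the direction:

    #include <CoreFoundation/CoreFoundation.h>
    #include <Security/SecTransform.h>
    #include <Security/SecCustomTransform.h>

    // Run the hypothetical registered Base16 transform in either direction.
    static CFDataRef RunBase16(CFDataRef input, Boolean encode, CFErrorRef *error) {
        // SecTransformCreate instantiates a transform previously registered
        // with SecTransformRegister.
        SecTransformRef xform = SecTransformCreate(CFSTR("com.example.Base16"), error);
        if (!xform) return NULL;

        // Feed the data and the direction flag; the transform's implementation
        // is expected to branch on the "Encode" attribute.
        SecTransformSetAttribute(xform, kSecTransformInputAttributeName, input, error);
        SecTransformSetAttribute(xform, CFSTR("Encode"),
                                 encode ? kCFBooleanTrue : kCFBooleanFalse, error);

        CFDataRef result = (CFDataRef)SecTransformExecute(xform, error);
        CFRelease(xform);
        return result; // caller releases
    }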
I have some old archives that are basically stored preferences created with NSArchiver. I want to be able to decode them with NSKeyedUnarchiver, since NSArchiver/NSUnarchiver are deprecated in favor of their keyed counterparts.
Is there any way to make this work?
You cannot decode an NSArchiver archive with NSKeyedUnarchiver. The NSArchiver format is deprecated, not just the classes. If you have data in the old format, you may have to use a deprecated class to decode it. The point is that you would then re-encode the data using NSKeyedArchiver (or one of the other options like Codable in Swift) and store it in a modern format.
When using the Windows Machine Learning (Windows ML) library, the input and output of ONNX models is often in either TensorFloat or ImageFeatureValue format.
My question: what is the difference between these? It seems I can change the type of the input in the automatically created model.cs file after ONNX import (for body pose detection) from TensorFloat to ImageFeatureValue and the code still runs. This makes it easier to work with video frames, for example, since I can then create my input via ImageFeatureValue.CreateFromVideoFrame(frame).
Is there a reason this might lead to problems, and what are the differences between the two when using video frames as input? I don't see it in the documentation. And why does the model.cs script create a TensorFloat rather than an ImageFeatureValue in the first place if the input is a video frame?
Found the answer here.
If Windows ML does not support your model's color format or pixel range, then you can implement conversions and tensorization. You'll create an NCHW four-dimensional tensor for 32-bit floats for your input value. See the Custom Tensorization Sample for an example of how to do this.
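To make the difference concrete, here is a rough sketch in C++/WinRT (the model.cs in the question is the C# projection of the same WinRT API). The model path, the input name "input", and the 1x3x224x224 shape are placeholders for whatever your model actually declares:

    #include <winrt/Windows.AI.MachineLearning.h>
    #include <winrt/Windows.Media.h>
    #include <vector>

    using namespace winrt;
    using namespace winrt::Windows::AI::MachineLearning;
    using namespace winrt::Windows::Media;

    void EvaluateFrame(VideoFrame const& frame, bool useImageFeatureValue)
    {
        LearningModel model = LearningModel::LoadFromFilePath(L"model.onnx");
        LearningModelSession session(model);
        LearningModelBinding binding(session);

        if (useImageFeatureValue)
        {
            // ImageFeatureValue: Windows ML does the tensorization for you,
            // i.e. pixel-format conversion, scaling to the model's input size,
            // and NCHW layout.
            binding.Bind(L"input", ImageFeatureValue::CreateFromVideoFrame(frame));
        }
        else
        {
            // TensorFloat: you build the NCHW float tensor yourself, so you own
            // the color conversion and normalization the model expects.
            std::vector<float> pixels(1 * 3 * 224 * 224, 0.0f); // fill from the frame yourself
            binding.Bind(L"input", TensorFloat::CreateFromArray({ 1, 3, 224, 224 }, pixels));
        }

        session.Evaluate(binding, L"run");
    }

The upshot: binding an ImageFeatureValue is convenient as long as the model's input is an image the runtime knows how to tensorize; binding a TensorFloat is what you fall back to when you need to do the conversion yourself.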
I'm looking for a string representation of arbitrary fields inside protocol buffer messages. Is there any library that implements this? I've looked at using field masks, but they don't have strong support for repeated fields.
Protocol buffer message and field descriptors provide field access by name. This allows you to find a particular field using a path and, for example, to erase it, if that's what you are asking for (if not, I'd recommend expanding the question to include an example of what you'd like to do).
One corresponding Java method is getDescriptorForType (the return type is a message descriptor, where you'll find field descriptors).
There is a similar descriptor API for C++ (in Java, you could theoretically also use reflection).
This API is not available in the lite runtime.
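As a concrete illustration, here is a small C++ sketch (a hypothetical helper, not a library function) that walks a dotted path like "outer.inner.value" with the descriptor/reflection API and clears the addressed field. It deliberately stops at singular message fields, which is exactly where repeated fields make a plain path ambiguous:

    #include <string>
    #include <google/protobuf/message.h>
    #include <google/protobuf/descriptor.h>

    using google::protobuf::FieldDescriptor;
    using google::protobuf::Message;

    // Clears the field addressed by a dotted path such as "outer.inner.value".
    // Returns false if the path does not resolve to a field.
    bool ClearFieldByPath(Message* msg, const std::string& path) {
        size_t start = 0;
        while (true) {
            const size_t dot = path.find('.', start);
            const std::string part =
                path.substr(start, dot == std::string::npos ? std::string::npos : dot - start);
            const FieldDescriptor* field = msg->GetDescriptor()->FindFieldByName(part);
            if (field == nullptr) return false;

            if (dot == std::string::npos) {
                // Last path component: clear it (works for singular and repeated fields).
                msg->GetReflection()->ClearField(msg, field);
                return true;
            }
            // Intermediate components must be singular message fields; repeated
            // fields would need an index, which a simple path cannot express.
            if (field->cpp_type() != FieldDescriptor::CPPTYPE_MESSAGE || field->is_repeated())
                return false;
            msg = msg->GetReflection()->MutableMessage(msg, field);
            start = dot + 1;
        }
    }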
Is there a data structure within LiveCode that can be used as a "holder" for associated data, letting me handle it collectively? I come from a Java / Javascript / C background so I am looking for a Class or Struct sort of data structure.
I've found examples of Groups, which seem to have some of this functionality, but it feels a bit like I'm bending the language to meet my needs.
As a specific example, suppose I had an image field on my screen that would randomly display an image and, when pressed, play an associated sound clip. I'd expect to create a list of "structures" that contained the path to the image and the path to the associated sound clip, and use that data to populate the image field and to decide what sound clip to play.
Would a Group be the correct structure to use in this case? Or am I approaching this in a way that isn't really fitting with the way LiveCode works?
It takes a little getting used to, but the xTalk world is much simpler and more open than any ordinary procedural language. So much of what you once had to manage is no longer required.
So when splash21 said that you could store all your image and sound references in a custom property, he was really saying that the LiveCode environment contains intrinsic, high level functionality that makes these sorts of things instantly accessible, and the only thing required of you is to call for them, and they simply work.
The only way to appreciate this is to make a few simple programs, to really see what is possible. Make your application. Everything you mentioned can be accomplished with perhaps a dozen lines of code in a single handler. I recommend that you join the LiveCode use-list and forums. The community is vibrant and eager to help, frequently with full-blown solutions to specific problems, but more importantly as guides and mentors to new users.
Craig Newman
Arrays in LiveCode are actually associative arrays (like hash maps). A key is associated with a value, and the value may itself be an array.
Chapter 5.5.7 of the User's Guide says
Array elements may contain nested or sub-elements, making them multi-dimensional. This type of array is ideal for processing hierarchical data structures such as trees or XML. To access a sub-element, simply declare it using an additional set of square brackets.

    put "ABC" into myVariable["myKeyName"]["aChildElement"]
See also: How to store pictures in a stack?
Dave- I'm hoping to get a struct-like container implemented in the near future. Meanwhile you can, as splash21 mentioned, use custom properties (or better yet, custom property sets) to do what you want. This will give you a pseudo-struct for each object and you can implement the file and sound specifications into the properties. And if you use that in conjunction with a behavior object you'll end up very close to a real inheritable class formation.
DMOs seem to be intended as a replacement for DirectShow transform filters. Some documents say there can be a DMO without input streams, but how is that supposed to work? If there is no input stream, what should be written in IMediaObject::CheckInputType?
You can implement an inputless DMO, for example one that generates its output internally. No one will call CheckInputType because no input streams exist, and that is fine; its body can stay empty and simply return something like E_NOTIMPL.
However, you should step back and explain what this is for. DMOs are not a general replacement for DirectShow filters. A DMO can be mapped into the DirectShow filter space through the DMO Wrapper Filter, but that wrapper does not support DMOs with no inputs, so an inputless DMO is useless in a DirectShow pipeline.
To create a custom DirectShow source, you need to implement a full filter.
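For what it's worth, a rough C++ sketch of the input-related IMediaObject methods for such a source-style DMO might look like this (the class name is illustrative; the rest of the COM and IMediaObject plumbing is omitted):

    #include <windows.h>
    #include <dmo.h>

    class CGeneratorDMO /* : public IMediaObject -- remaining methods omitted */
    {
    public:
        // Report zero input streams: the DMO produces its output internally.
        STDMETHODIMP GetStreamCount(DWORD* pcInputStreams, DWORD* pcOutputStreams)
        {
            if (!pcInputStreams || !pcOutputStreams) return E_POINTER;
            *pcInputStreams  = 0;
            *pcOutputStreams = 1;
            return S_OK;
        }

        // Never called in practice, because no input streams are reported.
        STDMETHODIMP CheckInputType(DWORD /*dwInputStreamIndex*/,
                                    const DMO_MEDIA_TYPE* /*pmt*/, DWORD /*dwFlags*/)
        {
            return E_NOTIMPL;
        }
    };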