Is there a mapping between NSFont.Weight and the integer values? - appkit

The NSFont API has two different ways to specify a weight. First, there is a struct NSFont.Weight which contains a floating point rawValue property. It's used in functions like:
NSFont.systemFont(ofSize fontSize: CGFloat, weight: NSFont.Weight) -> NSFont
Then there is another function for getting other fonts, which uses an integer.
NSFontManager.font(withFamily family: String, traits: NSFontTraitMask,
weight: Int, size: CGFloat) -> NSFont?
The documentation for that function says that integers are not simply rounded versions of the floats. The weight can be in the range 0-15, with 5 being a normal weight. But:
NSFont.Weight.regular.rawValue == 0.0
NSFont.Weight.light.rawValue == -0.4000000059604645
NSFont.Weight.black.rawValue == 0.6200000047683716
I don't see any mention of how to convert between NSFont.Weight and the integers. Maybe it's just some odd legacy API that was never cleaned up. Am I missing something?
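The best I have come up with so far is to bucket the rawValue onto the 0-15 scale myself. The breakpoints below are guesses around the published NSFont.Weight constants, not anything documented by Apple:
import AppKit

// Guessed, undocumented approximation: map NSFont.Weight raw values onto
// NSFontManager's 0-15 scale (5 = normal). The breakpoints are illustrative only.
func approximateManagerWeight(_ weight: NSFont.Weight) -> Int {
    switch weight.rawValue {
    case ..<(-0.7): return 1   // ultraLight (-0.8)
    case ..<(-0.5): return 2   // thin (-0.6)
    case ..<(-0.2): return 3   // light (-0.4)
    case ..<0.2:    return 5   // regular (0.0)
    case ..<0.28:   return 6   // medium (0.23)
    case ..<0.35:   return 8   // semibold (0.3)
    case ..<0.5:    return 9   // bold (0.4)
    case ..<0.6:    return 10  // heavy (0.56)
    default:        return 11  // black (0.62)
    }
}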

Sphinx is not formatting overloaded Python function parameters correctly

I am using doxygen + Sphinx to generate documentation for some Python bindings I have written.
The Python bindings are written using pybind11.
When I write the documentation string for a non-overloaded function, it formats properly.
Here is an example:
// Pybind11 python bindings.
// Module and class defined above...
.def("get_similarity", [](SDK &sdk, const Faceprint& faceprint1, const Faceprint& faceprint2) {
float similarity;
float probability;
ErrorCode status = sdk.getSimilarity(faceprint1, faceprint2, probability, similarity);
return std::make_tuple(status, probability, similarity);
},
R"mydelimiter(
Compute the similarity of the given feature vectors.
:param feature_vector_1: the first Faceprint to be compared.
:param feature_vector_2: the second Faceprint to be compared.
:return: The see :class:`ERRORCODE`, match probability and similairty score, in that order. The match probability is the probability that the two faces feature vectors are a match, while the similairty is the computed similairty score.
)mydelimiter",
py::arg("feature_vector_1"), py::arg("feature_vector_2"))
This is what it looks like:
When I write documentation for an overloaded function, the formatting is off. Here is an example:
.def("set_image", [](SDK &sdk, py::array_t<uint8_t> buffer, uint16_t width, uint16_t height, ColorCode code) {
py::buffer_info info = buffer.request();
ErrorCode status =sdk.setImage(static_cast<uint8_t*>(info.ptr), width, height, code);
return status;
},
R"mydelimiter(
Load an image from the given pixel array in memory.
Note, it is highly encouraged to check the return value from setImage before proceeding.
If the license is invalid, the ``INVALID_LICENSE`` error will be returned.
:param pixel_array: decoded pixel array.
:param width: the image width in pixels.
:param height: the image height in pixels.
:param color_code: pixel array color code, see :class:`COLORCODE`
:return: Error code, see :class:`ERRORCODE`
)mydelimiter",
py::arg("pixel_array"), py::arg("width"), py::arg("height"), py::arg("color_code"))
// Other overrides of set_image below...
The formatting is all off for this, in particular the way the Parameters and Returns are displayed. This is what it looks like.
How can I get the set_image docs to look like the get_similarity docs?
I'm not sure how to properly solve the problem, but here is a hack I used to make them appear the same. Basically, I hard-coded the formatting:
R"mydelimiter(
Load an image from the given pixel array in memory.
Note, it is highly encouraged to check the return value from setImage before proceeding.
If the license is invalid, the ``INVALID_LICENSE`` error will be returned.
:Parameters:
- **pixel_array** - decoded pixel array.
- **width** - the image width in pixels.
- **height** - the image height in pixels.
- **color_code** - pixel array color code, see :class:`COLORCODE`
:Returns:
Error code, see :class:`ERRORCODE`
)mydelimiter"

Is it possible to model LLVM-like inheritance hierarchy in Rust using enums?

By LLVM-like inheritance hierarchy, I mean the approach to runtime polymorphism described in this documentation: https://llvm.org/docs/HowToSetUpLLVMStyleRTTI.html.
It is easy to implement the same feature in Rust using enums, like:
enum Shape {
    Square(Square),
    Circle(Circle),
}

enum Square {
    SquareA(SquareAData),
    SquareB(SquareBData),
}

enum Circle {
    CircleA(CircleAData),
    CircleB(CircleBData),
}
// assume ***Data can be arbitrarily complex
However, the memory layout inevitably differs from an LLVM-like inheritance hierarchy, which uses a single integer field to record the discriminant of the type. Even though current rustc already applies many size optimizations to enums, a Shape object in the example above will still carry two integer fields for discriminants.
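A quick size check illustrates this (placeholder payload structs here, since the actual ***Data contents do not matter for the layout argument):
#![allow(dead_code)]

use std::mem::size_of;

struct SquareAData(u64);
struct SquareBData(u64);
struct CircleAData(u64);
struct CircleBData(u64);

enum Square { SquareA(SquareAData), SquareB(SquareBData) }
enum Circle { CircleA(CircleAData), CircleB(CircleBData) }
enum Shape { Square(Square), Circle(Circle) }

fn main() {
    // On a typical 64-bit target this prints 16, 16 and 24: Square and Circle each
    // store a tag next to their 8-byte payload, and Shape adds a second tag of its
    // own instead of reusing the inner one.
    println!("Square: {} bytes", size_of::<Square>());
    println!("Circle: {} bytes", size_of::<Circle>());
    println!("Shape:  {} bytes", size_of::<Shape>());
}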
I have tried several approaches without success; the closest to an LLVM-like inheritance hierarchy, in my mind, is to enable the nightly feature arbitrary_enum_discriminant and assign each variant of the enum a discriminant:
#![feature(arbitrary_enum_discriminant)]

enum Shape {
    Square(Square),
    Circle(Circle),
}

#[repr(usize)]
enum Square {
    SquareA(SquareAData) = 0,
    SquareB(SquareBData) = 1,
}

#[repr(usize)]
enum Circle {
    CircleA(CircleAData) = 2,
    CircleB(CircleBData) = 3,
}
It should be perfectly possible for Shape to go without its own discriminant, since its two variants have non-intersecting discriminant sets. However, rustc still assigns an integer discriminant to it, making it larger than Square or Circle. (rustc version: rustc 1.44.0-nightly (f509b26a7 2020-03-18))
So my question is: is it at all possible in Rust to use enums to model an LLVM-like inheritance hierarchy, with only a single integer discriminant in the top-level "class"?

Golang image ColorModel()

I am teaching myself Go. I decided to try some computer vision stuff. First things first, I was going to make an image histogram. I'm trying to get the color model so I know the intensity range of the pixels. When I print image.ColorModel() it gives me a cryptic hexadecimal output:
color model: &{0x492f70}
I couldn't find any explanation in the docs. I was expecting some sort of enum type that would map to a color model like NRGBA, RGBA, etc.
What does that hexadecimal mean? What does the ampersand with curly braces &{...} mean? Also, what is the "N" in NRGBA? I can't find anything about it.
To extend putu's answer, comparing the returned color model to the "prepared" models of the image package only works if one of those models is used; otherwise all comparisons will result in false. It is also quite inconvenient to list and compare against all possible models.
Instead, to get a more descriptive form of the color model, we may use this little trick: try to convert any color using the color model of the image. A concrete color model converts all color values (implementations) to the color type / implementation used by the image. Printing the type of the resulting color will tell you what you are looking for.
Example:
col := color.RGBA{} // This is the "any" color we convert
var img image.Image
img = &image.NRGBA{}
fmt.Printf("%T\n", img.ColorModel().Convert(col))
img = &image.Gray16{}
fmt.Printf("%T\n", img.ColorModel().Convert(col))
img = &image.NYCbCrA{}
fmt.Printf("%T\n", img.ColorModel().Convert(col))
img = &image.Paletted{}
fmt.Printf("%T\n", img.ColorModel().Convert(col))
Output (try it on the Go Playground):
color.NRGBA
color.Gray16
color.NYCbCrA
<nil>
As can be seen, an image of type *image.NRGBA models colors using color.NRGBA, an image of type *image.Gray16 models colors using color.Gray16 etc. As a last example I used *image.Paletted, where the result was nil, because the image's palette was empty.
To quickly fix the nil palette, let's provide an initial palette:
img = &image.Paletted{Palette: []color.Color{color.Gray16{}}}
fmt.Printf("%T\n", img.ColorModel().Convert(col))
Now the output will be (try this on the Go Playground):
color.Gray16
An Image is declared as an interface with the following method set:
type Image interface {
    ColorModel() color.Model
    Bounds() Rectangle
    At(x, y int) color.Color
}
Method ColorModel() returns an interface named color.Model, which is declared as:
type Model interface {
    Convert(c Color) Color
}
Since the ColorModel returns an interface, you can't dereference it using *. What you see as &{0x492f70} is the underlying data structure which implements color.Model interface, and in this case, it is a pointer which points to address 0x492f70. Usually, it doesn't matter how ColorModel's underlying data is implemented (any type is valid as long as it has Convert(c Color) Color method), but if you're curious, the models for several standard color types are implemented as a pointer to unexported struct declared as:
type modelFunc struct {
    f func(Color) Color
}
What you get when you print the ColorModel is a pointer to this struct. Try printing it using fmt.Printf("%+v\n", img.ColorModel()) and you will see output like &{f:0x492f70}, in which f denotes the field name in the above struct.
In the documentation, there are several models for the standard color types, e.g. color.NRGBAModel, color.GrayModel, etc. If you want to detect the image's color model, you can compare it to these standard models, e.g.
if img.ColorModel() == color.RGBAModel {
    // 32-bit RGBA color; each R, G, B, A component requires 8 bits
} else if img.ColorModel() == color.GrayModel {
    // 8-bit grayscale
}
// ...
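For completeness, here is a minimal runnable version of that comparison; the *image.Gray construction is just for demonstration:
package main

import (
	"fmt"
	"image"
	"image/color"
)

func main() {
	// *image.Gray reports color.GrayModel, so the second case below matches.
	var img image.Image = image.NewGray(image.Rect(0, 0, 10, 10))

	switch img.ColorModel() {
	case color.RGBAModel:
		fmt.Println("32-bit RGBA")
	case color.GrayModel:
		fmt.Println("8-bit grayscale")
	default:
		fmt.Println("some other (possibly custom) color model")
	}
}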
What does that hexadecimal mean?
Memory pointer address of the variable you're printing.
What does the ampersand with curly braces &{...} mean?
Refer to this SO post.
What is the "N" in NRGBA?
NRGBA represents a non-alpha-premultiplied 32-bit color. Refer to the doc.
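To make the premultiplication difference concrete, here is a small example (mine, not from the linked docs): a half-transparent red kept as NRGBA stores R at full value, the premultiplied RGBA form stores R already scaled by alpha, and both convert to the same 16-bit premultiplied values through RGBA():
package main

import (
	"fmt"
	"image/color"
)

func main() {
	// Non-alpha-premultiplied: R, G, B stored at full value, alpha applied on conversion.
	n := color.NRGBA{R: 255, G: 0, B: 0, A: 128}
	// Alpha-premultiplied: R, G, B already scaled by alpha.
	p := color.RGBA{R: 128, G: 0, B: 0, A: 128}

	fmt.Println(n.RGBA()) // 32896 0 0 32896
	fmt.Println(p.RGBA()) // 32896 0 0 32896
}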

pentaho CDE conditional formatting of bubble chart

I have used the CCC Heat Grid in CDE to create a bubble chart with bubbles of different colors. My data set has only 6 values: (1, 1.1, 2, 2.1, 3, 3.1). I have set the sizeRole property to "value" so that the size of the bubble varies with the magnitude of these six values. Alternatively, I could have set the colorRole property to "value". I have set three colors: green (1), yellow (2) and red (3).
Now, what I want is to have 1 as green, 2 as yellow and 3 as red, and the biggest constant size for 1.1, 2.1 and 3.1. The values 1.1, 2.1 and 3.1 represent alarms in my data set, so I want them to be the biggest bubbles or to have some other differentiating visual element.
I tried the following in preExecution, but with no luck:
function changeBubbles(){
    var cccOptions = this.chartDefinition;

    // For changing extension points, a little more work is required:
    var eps = Dashboards.propertiesArrayToObject(cccOptions.extensionPoints);

    // add extension points:
    eps.bar_shape = function getShape(){
        var val = this.scene.vars.value.value;
        if(val == 1.1 || val == 2.1 || val == 3.1){
            return 'cross';
        }
    };

    // Serialize back eps into cccOptions
    cccOptions.extensionPoints = Dashboards.objectToPropertiesArray(eps);
}
How can we achieve this?
I hope the answer is still relevant, given that this is a late response.
To use bubbles you should have useShapes: true.
You can set a different constant shape by using the shape option. For example, shape: "cross".
To have the bubble size be constant, you should set the "sizeRole" to null: sizeRole: null. Bubbles will take all of the available "cell" size.
Then, the "value" column should be picked up by the "colorRole", but to be explicit, specify: colorRole: "value".
By default, because the color role will be bound to a continuous dimension ("value"), the color scale will be continuous as well.
To make it a discrete scale, change the "value" dimension to be discrete:
dimensions: {
"value": {isDiscrete: true}
}
Finally, to ensure that the colors are mapped to the desired values, specify the "colorMap" option:
colorMap: {
"1": "green",
"2": "yellow",
"3": "red"
}
That's it. I hope this just works :-)
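Putting the pieces together, a preExecution sketch along these lines should work; the property names are the ones above, and placing them directly on chartDefinition follows the question's own changeBubbles function (adjust to your component as needed):
function formatBubbles(){
    var cd = this.chartDefinition;

    cd.useShapes = true;       // draw shapes instead of plain colored cells
    cd.shape = "cross";        // constant shape for every bubble
    cd.sizeRole = null;        // constant size: bubbles take the available cell
    cd.colorRole = "value";    // color driven by the "value" column

    // Make the color scale discrete instead of continuous...
    cd.dimensions = { "value": { isDiscrete: true } };

    // ...and pin each value to its color.
    cd.colorMap = {
        "1": "green",
        "2": "yellow",
        "3": "red"
    };
}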

Export a Uint8 array as an image using Images in Julia

I recently asked how to convert Float32 or Uint8 arrays into images in the Images package. I got an answer for the Float32 case, but am still having trouble figuring out how to save a Uint8 array.
As an example, let's create a random Uint8 array using the traditional Matlab scheme where the dimensions are (m,n,3):
array = rand(Uint8, 50, 50, 3);
img = convert(Image, array);
Using the same approach as works for the Float32 case,
imwrite(img, "out.png")
fails with message
ERROR: method 'mapinfo' has no method matching mapinfo(::Type{ImageMagick}, ::Image{Uint8, 3, Image{Uint8, 3, Array{Uint8, 3}}}).
I checked the documentation, and it says
If data encodes color information along one of the dimensions of the array (as opposed to using a ColorValue array, from the Color.jl package), be sure to specify the "colordim" and "colorspace" in properties.
However, inspecting the img object previously created shows that it has colordim = 3 and colorspace = RGB already set up, so this can't be the problem.
I then searched the documentation for all instances of MapInfo. In core.md there is one occurrence:
scalei: a property that controls default contrast scaling upon display. This should be a MapInfo value, to be used for setting the contrast upon display. In the absence of this property, the range 0 to 1 will be used.
But there was no information on what exactly a MapInfo object is, so I looked further, and in function_reference.md it says:
Here is how to directly construct the major concrete MapInfo types:
MapNone(T), indicating that the only form of scaling is conversion to type T. This is not very safe, as values "wrap around": for example, converting 258 to a Uint8 results in 0x02, which would look dimmer than 255 = 0xff.
...
and some other examples. So I tried to specify scalei = MapNone(Uint8) as follows:
img2 = Image(img, colordim = 3, colorspace = "RGB", scalei = MapNone(Uint8));
imwrite(img, "out.png")
but got the same error again.
How do you encode Uint8 image data using Images in Julia?
You can convert back and forth between arrays of primitive types such as UInt8 and arrays of color types. These conversions are achieved in a unified way via two functions: colorview and channelview.
Example
Convert array of UInt8 to array of RGB:
arr = rand(UInt8, 3, 50, 50)
img = colorview(RGB, arr / 255)
Convert back to channel view:
channelview(img)
Notes
In this example the RGB color type requires that the entries of the array live in [0,1] as floating point. I manually converted UInt8 to Float64 using an explicit division by 255. There is probably a more generic way of achieving this result with reinterpret or some other function in Images.jl
The colorview and channelview functions assume that the channel dimension is the first dimension of the array. You can use permutedims in case your channels live in a different dimension, or use some function in Images.jl (maybe reinterpretc?) to do it efficiently without memory copies.
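For reference, one such reinterpret-based variant (my sketch, assuming a reasonably recent Images.jl, which re-exports N0f8, and FileIO for saving) avoids the floating-point copy entirely:
using Images, FileIO

arr = rand(UInt8, 3, 50, 50)                  # channel dimension comes first
img = colorview(RGB, reinterpret(N0f8, arr))  # reinterpret bytes as fixed-point values in [0, 1]

save("out.png", img)                          # needs a PNG backend such as ImageIO or ImageMagick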
