how do has_field() methods relate to default values in protobuf? - protocol-buffers

I'm trying to determine the relationship between default values and the has_foo() methods that are declared in various programmatic interfaces. In particular, I'm trying to determine under what circumstances (if any) you can "tell the difference" between a field explicitly set to the default value, and an unset value.
If I explicitly set a field (e.g. "Bar.foo") to its default value (e.g., zero), then is Bar::has_foo() guaranteed to return true for that data structure? (This appears to be true for the C++ generated code, from a quick inspection, but that doesn't mean it's guaranteed.) If this is true, then it's possible to distinguish between an explicitly set default value and an unset value prior to serialization.
If I explicitly set a field to its default value (e.g., zero), and then serialize that object and send it over the wire, will the value be sent or not? If it is not, then clearly any code that receives this object can't distinguish between an explicitly set default value and an unset value. I.e., it won't be possible to distinguish these two cases after serialization -- Bar::has_foo() will return false in both cases.
If it's not possible to tell the difference, what is the recommended technique for encoding a protobuf field if I want to encode a "nullable" optional value? A couple of options come to mind, but neither seems great: (a) add an extra boolean field that records whether the field is set or not, or (b) use a "repeated" field even though I semantically want an optional field -- this way I can tell the difference between no value (a length-zero list) and a set value (a length-one list).

The following applies to 'proto2' syntax, not 'proto3':
The notion of a field being set or not is a core feature of Protobuf. If you set a field to a value (any value), then the corresponding has_xxx method must return true, otherwise you have a bug in the API.
If you do not set a field and then serialize the message, no value is sent for that field. The receiving side will parse the message, discover which values were included, and set the corresponding "has_xxx" values.
Exactly how this is implemented in the wire format is documented here: http://code.google.com/apis/protocolbuffers/docs/encoding.html. The short version is that messages are encoded as a sequence of key-value pairs, and only fields which are explicitly set are included in the encoded message.
Default values only come into play when you attempt to read an unset field.
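For concreteness, here is a minimal C++ sketch of that behaviour. It assumes a hypothetical proto2 definition message Bar { optional int32 foo = 1; } compiled with the standard protoc C++ plugin; the header name "bar.pb.h" is only illustrative.

    // Assumed (hypothetical) proto2 definition:
    //
    //   syntax = "proto2";
    //   message Bar {
    //     optional int32 foo = 1;
    //   }

    #include <iostream>
    #include <string>
    #include "bar.pb.h"  // generated from the .proto above

    int main() {
      Bar bar;
      std::cout << bar.has_foo() << "\n";     // 0 -- nothing has been set yet
      std::cout << bar.foo() << "\n";         // 0 -- reading an unset field yields the default

      bar.set_foo(0);                         // explicitly set the field to its default value
      std::cout << bar.has_foo() << "\n";     // 1 -- the field now counts as set

      std::string wire;
      bar.SerializeToString(&wire);           // a key-value pair for foo IS written to the wire

      Bar parsed;
      parsed.ParseFromString(wire);
      std::cout << parsed.has_foo() << "\n";  // 1 -- the receiver can tell it was set

      parsed.clear_foo();                     // unset the field again
      parsed.SerializeToString(&wire);        // this time nothing is written for foo
      return 0;
    }

In other words, in proto2 an explicitly set default value survives the round trip: it is written on the wire and has_foo() is true on the receiving side, which is what makes the "nullable optional" pattern work without extra flag fields.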

Related

How to get the matched key from ngx translate?

I have a directive that is sending a whole array of possible "fallback" keys to the Translate pipe's transform method (as the second param, "args"), and ngx somehow settles on the one that actually exists. I know the Translate pipe has a 'lastKey' property, but that turns out to not have the right value if a fallback is chosen.
Is there a way to get what path it actually translated?

What is the point of google.protobuf.StringValue?

I've recently encountered all sorts of wrappers in Google's protobuf package. I'm struggling to imagine the use case. Can anyone shed some light: what problem were these intended to solve?
Here's one of the documentation links: https://developers.google.com/protocol-buffers/docs/reference/csharp/class/google/protobuf/well-known-types/string-value (it says nothing about what this can be used for).
One thing that will differ in behavior between this and the simple string type is that this field will be written less efficiently (a couple of extra bytes, plus a redundant memory allocation). For other wrappers, the story is even worse, since the repeated variants of those fields will be written inefficiently (Google's official Protobuf serializer doesn't support packed encoding for non-numeric types).
Neither seems to be desirable. So, what's this all about?
There are a few reasons, mostly to do with where these are used - see struct.proto.
StringValue can be null; string often can't be in a language interfacing with protobufs. For example, in Go strings are always set: the "zero value" for a string is "", the empty string, so it's impossible to distinguish between "this value is intentionally set to empty string" and "there was no value present". StringValue can be null and so solves this problem. It's especially important when they're used in a StructValue, which may represent arbitrary JSON: to do so it needs to distinguish between a JSON key which was set to empty string (a StringValue with an empty string) and a JSON key which wasn't set at all (a null StringValue).
Also, if you look at struct.proto, you'll see that these aren't fully fledged message types in the proto - they're all generated from message Value, which has a oneof kind { number_value, string_value, bool_value, ... }. By using a oneof, struct.proto can represent a variety of different values in one field. Again, this makes sense considering what struct.proto is designed to handle - arbitrary JSON - you don't know what type of value a given JSON key has ahead of time.
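To make the presence distinction concrete, here is a small C++ sketch. It assumes a hypothetical proto3 message Profile with both a plain string field and a google.protobuf.StringValue field; the header name "profile.pb.h" is only illustrative.

    // Assumed (hypothetical) proto3 definition:
    //
    //   syntax = "proto3";
    //   import "google/protobuf/wrappers.proto";
    //   message Profile {
    //     string nickname = 1;                          // plain scalar field
    //     google.protobuf.StringValue middle_name = 2;  // wrapper (message) field
    //   }

    #include <iostream>
    #include "profile.pb.h"  // generated from the .proto above

    int main() {
      Profile p;

      // Plain proto3 string: "" and "not set" are indistinguishable,
      // and no has_nickname() accessor is generated for it.
      p.set_nickname("");

      // Wrapper field: it is a message, so field presence is tracked.
      std::cout << p.has_middle_name() << "\n";   // 0 -- absent, i.e. "null"
      p.mutable_middle_name()->set_value("");
      std::cout << p.has_middle_name() << "\n";   // 1 -- present, with value ""
      return 0;
    }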
In addition to George's answer, you can't use a Protobuf primitive as the parameter or return value of a gRPC procedure.

Difference between `normalize` and `parse` callbacks in redux-form

The current redux-form documentation (version 6.5.0 at the time of this writing) mentions 2 callbacks for the Field object: normalize and parse.
Both descriptions sound pretty similar: they take the value entered by the user in an input field and transform it into a value stored in redux.
What's the difference between these 2 callbacks?
Essentially the two functions do exactly the same thing, i.e. take the value a user has input to the Field and transform it before it's stored in the redux store.
The differences lie in the flavor of these functions and the order in which they are called:
parse takes the string input value and converts it to the type you want stored in the redux store; for example, you parse a date string from a datepicker into a Date object
normalize is meant to enforce certain formatting of input values in the redux store; for example, ensuring that phone numbers are stored in a consistent format
When it comes to the order in which these methods are called in the redux-form value lifecycle: parse is called before normalize, which means normalize is called with the parsed input value.
So in short, use parse to convert user input (usually in string form) to a type that suits your needs. Use normalize to enforce a specific input format on the user.
This is what the Value Lifecycle Hooks page tries to explain.

Why are kCIAttribute(Max|Min) and kCIAttributeSlider(Max|Min) sometimes different values

In CoreImage a CIFilter has both a set of Max/Min values and a set of SliderMax/Min values.
The documentation for the Max/Min says "The maximum/minimum value for a filter parameter" and the SliderMax/Min says "The maximum/minimum value, specified as a floating-point value, to use for a slider that controls input values for a filter parameter."
I'm wondering why these might be different values, as they are, for example, for the inputAngle parameter of CIHueAdjust, where max/min are 0/0 but sliderMax/Min is 3.14/-3.14?
And also what is the use of having the max/min values at 0/0 like they are for most of the filters?
I would wager that a value of 0 means there is no max/min, that any value representable by the datatype is valid for the filter.
As for why there's a separate slider value, it's because what you present to the user is often different from what's accepted. For example, CIHueAdjust may accept any value for the actual adjustment, but a slider presented to the user has no reason to go outside the range -3.14..3.14 (because anything outside this range is equivalent to a value inside the range).

Dynamics AX Mandatory Enum field cannot be set correctly through UI

Can anyone explain the following behaviour to me?
When a field type in an AX Table is set to an Enum, you can select any of the Enum values as a value for the field.
But if you make the field Mandatory, you can no longer select the first Enum value in the list through the user interface.
Obviously this can be worked around by not making the field Mandatory. I am looking for an explanation of this bizarre behaviour.
AX does not have a null value concept. Instead, the following values are considered "not entered" by definition:
string: blank
int and int64: 0 (zero)
enum: 0 (typically the first value)
date: 01\01\1900 (displays as blank)
For new base enums, add a blank zero enum value (by convention, name it None). This will make it possible to use mandatory fields with this enum type.
Also have a look at this: Mark mandatory fields on form, if not filled with valid value
You're saying "if you make the field Mandatory, you can no longer select the first Enum value in the list through the user interface" - this is exactly what the Mandatory property does for enums: prevents you from using a zero value. E.g. if you make NoYesId mandatory you'll be able to enter only Yes because No would no longer be allowed - why would you need it on the form then?
Please also note that from a user perspective it isn't necessarily clear what enum value is zero, so if it didn't work the way it works, understanding what value is not allowed when the enum is mandatory could be tricky.
