Does GetPropertyNames() preserve property order? - V8

If we have this JavaScript code:
var obj = {
    b: { propb: 10 },
    d: { propd: 15 },
    c: { propc: 15 },
    a: { propa: 5 }
};
Does v8::Object::GetPropertyNames() guarantee that the property names will be returned in the same order as they were defined above?
I did a test, and the order is preserved, but I want to know if this is guaranteed.

From the ECMA-262 5.1 specification: "The mechanics and order of enumerating the properties [...] is not specified."
For V8, the order in which properties are enumerated is currently also unspecified. It might work in some cases, but you'd better not rely on it.
Chromium ticket about V8 ordering: http://code.google.com/p/chromium/issues/detail?id=21901

Related

Can a gRPC message type be set to strictly not null? [duplicate]

I'm writing my first API endpoint in Go using gRPC/protocol buffers. I'm rather new to Go.
Below is the file I'm writing for my test case(s):
package my_package

import (
    "context"
    "testing"

    "github.com/stretchr/testify/require"
    "google.golang.org/protobuf/types/known/structpb"
    "google.golang.org/protobuf/types/known/timestamppb"

    "github.com/MyTeam/myproject/cmd/eventstream/setup"
    v1handler "github.com/MyTeam/myproject/internal/handlers/myproject/v1"
    v1interface "github.com/MyTeam/myproject/proto/.gen/go/myteam/myproject/v1"
)

func TestEndpoint(t *testing.T) {
    conf := &setup.Config{}
    _ = conf // config is wired up elsewhere; unused in this snippet

    // Initialize our API handlers
    myhandler := v1handler.New(&v1handler.Config{})

    t.Run("Success", func(t *testing.T) {
        res, err := myhandler.Endpoint(context.Background(), &v1interface.EndpointRequest{
            A: "S",
            B: &structpb.Struct{
                Fields: map[string]*structpb.Value{
                    "T": &structpb.Value{
                        Kind: &structpb.Value_StringValue{
                            StringValue: "U",
                        },
                    },
                    "V": &structpb.Value{
                        Kind: &structpb.Value_StringValue{
                            StringValue: "W",
                        },
                    },
                },
            },
            C: &timestamppb.Timestamp{Seconds: 1590179525, Nanos: 0},
        })
        require.NoError(t, err)

        // Assert we got what we want.
        require.Equal(t, "Ok", res.Text)
    })
}
This is how the EndpointRequest message is defined in the .proto file behind the v1interface package imported above:
// A v1 interface Endpoint Request object.
message EndpointRequest {
    // a is something.
    string a = 1 [(validate.rules).string.min_len = 1];
    // b can be a complex object.
    google.protobuf.Struct b = 2;
    // c is a timestamp.
    google.protobuf.Timestamp c = 3;
}
The test case above seems to work fine.
I put a validation rule in place that effectively makes argument a mandatory, because it requires that a be a string of at least one character. So if you omit a, the endpoint returns a 400.
But now I want to ensure that the endpoint returns a 400 if c or b is omitted. How can I do that? Proto3 got rid of the required keyword. So how can I check whether a non-string argument was passed in and react accordingly?
Required fields were removed in proto3. Here is the GitHub issue where you can read a detailed explanation of why that was done. Here is an excerpt:
We dropped required fields in proto3 because required fields are generally considered harmful and violating protobuf's compatibility semantics. The whole idea of using protobuf is that it allows you to add/remove fields from your protocol definition while still being fully forward/backward compatible with newer/older binaries. Required fields break this though. You can never safely add a required field to a .proto definition, nor can you safely remove an existing required field because both of these actions break wire compatibility
IMO, that was a questionable decision, and I'm obviously not alone in thinking that. The final decision should have been left to the developer.
The short version: you can't.
required was removed mostly because it made changes backwards incompatible. Attempting to re-implement it using validation options is not quite as drastic (changes are easier), but will run into shortcomings as you can see.
Instead, keep the validation out of the proto definition and move it into the application itself. Anytime you receive a message, you should be checking its contents anyway (this was also true when required was a thing). It is a rare case that the simple validation provided by options or required is sufficient.
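A minimal sketch of what that application-level validation could look like for the handler in the question. This assumes the generated v1interface types from the test above (the Handler receiver and the EndpointResponse type with its Text field are inferred from the test, not confirmed by the question): message-typed proto3 fields such as b and c become pointers in generated Go code, so an omitted field arrives as nil, and codes.InvalidArgument is the status a gRPC-to-HTTP gateway would typically surface as a 400:

import (
    "context"

    "google.golang.org/grpc/codes"
    "google.golang.org/grpc/status"
)

// Endpoint rejects requests whose message-typed fields were omitted.
// Sketch only: Handler stands in for the question's v1handler type.
func (h *Handler) Endpoint(ctx context.Context, req *v1interface.EndpointRequest) (*v1interface.EndpointResponse, error) {
    // Struct- and Timestamp-typed fields are pointers in Go, so an
    // omitted field shows up as nil. (Scalar a has no presence and
    // cannot be checked this way, hence the min_len rule.)
    if req.GetB() == nil {
        return nil, status.Error(codes.InvalidArgument, "b is required")
    }
    if req.GetC() == nil {
        return nil, status.Error(codes.InvalidArgument, "c is required")
    }

    // ... the endpoint's real work goes here ...
    return &v1interface.EndpointResponse{Text: "Ok"}, nil
}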

Are there any programming pitfalls of using a map with an empty interface as the KEY

Are there any programming pitfalls of using maps in this manner:
type Set struct {
    theMap map[interface{}]struct{}
}
StringSet := NewSet("abc", "pqr")
IntSet := NewSet(1, 2)
DateSet := NewSet(time.Date(2021, 2, 15, 0, 0, 0, 0, time.UTC))
Just to be clear, I know what I'm doing is probably against the spirit of several 'best practices', but that isn't my question here. I'm specifically thinking of programming issues like memory issues due to different element sizes, increased chance of hash collisions, performance degradation due to increased type assertion, etc.
Some more info:
I need to create some 'sets' of various datatypes in my application. I see in Essential Go that the best way to use sets is to use a map with an empty struct as the value.
However, in the absence of generics, I would either need to make a new type of Set for each type of data/element which I wish to store in my sets:
type StringSet struct {
    stringMap map[string]struct{}
}

type DateSet struct {
    dateMap map[time.Time]struct{}
}

type IntSet struct {
    intMap map[int]struct{}
}
...or, use the empty interface as the key of a hashmap:
type Set struct {
    theMap map[interface{}]struct{}
}
The 2nd option works very well (you can find my complete code here), but I'm worried that I'm overlooking something obvious and will run into problems later.
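One concrete pitfall to make the worry specific: with a concrete key type like map[string]struct{}, the compiler guarantees the key is comparable, but with map[interface{}]struct{} that check moves to runtime, so inserting a slice, map, or function (or any struct containing one) panics. A self-contained sketch, where NewSet is a simplified stand-in for the linked code:

package main

import "fmt"

// Simplified stand-in for the Set type above.
type Set struct {
    theMap map[interface{}]struct{}
}

func NewSet(items ...interface{}) *Set {
    s := &Set{theMap: make(map[interface{}]struct{})}
    for _, item := range items {
        s.theMap[item] = struct{}{} // panics if item is not comparable
    }
    return s
}

func main() {
    // Comparable keys work fine, even mixed in one set.
    s := NewSet("abc", 1, 2)
    fmt.Println(len(s.theMap)) // 3

    // Slices are not comparable: this compiles, then panics at runtime
    // with "hash of unhashable type []int".
    defer func() { fmt.Println("recovered:", recover()) }()
    NewSet([]int{1, 2})
}

Equality is also dynamic-type-sensitive: 1 (an int) and int64(1) are different keys, so a mixed set can hold values that look like duplicates. Neither issue exists with the per-type sets.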
Thanks for your help.

How to distinguish not provided and empty array in grpc service?

See https://github.com/grpc/grpc-node/issues/1202.
Usually in CRUD operations, a value that is not provided means "do not change that field," while an empty array [] means "clear all items in that field."
But if you try to implement CRUD operations and expose them as services via gRPC, the above scenario is hard to implement.
service CRUD {
    rpc updateTable(updateRequest) returns (updateResponse) {}
}

message updateRequest {
    repeated string a = 1;
    string b = 2;
}

message updateResponse {
    bool success = 1;
}
If you load the package with default options, then the client can't delete the items of a via
client.CRUD.updateTable({a: []})
because the argument {a: []} becomes {} by the time it arrives at the server side.
If you load the package with the option {arrays: true}, then the field a will be cleared unintentionally when the client side is only trying to update other fields:
client.CRUD.updateTable({b: 'updated value'})
because the argument {b: 'updated value'} becomes {a: [], b: 'updated value'} by the time it arrives at the server side.
Can anyone share some better ideas on how to handle these two scenarios with grpc-node and proto3?
The protobuf encoding doesn't distinguish between these two cases. Since protobuf is language-agnostic, it doesn't understand the conceptual nuance of "undefined" versus "[]" in JavaScript.
You would need to pass additional information inside the proto message in order to distinguish between the two cases.
I would highly suggest reading the design documentation here: https://developers.google.com/protocol-buffers
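One common way to carry that additional information is to wrap the repeated field in a message, because message-typed fields do have presence in proto3 (the well-known google.protobuf.ListValue type can play the same role). The sketch below is an assumption, not part of the question's proto: StringList is a made-up wrapper, and the Go types are hand-written stand-ins for what protoc would generate, used here just to show the nil-versus-empty check. The same unset-versus-present-but-empty distinction is what the server would test for in grpc-node.

// The wrapper would be declared in the .proto roughly like this:
//
//   message StringList {
//     repeated string items = 1;
//   }
//
//   message updateRequest {
//     StringList a = 1; // instead of: repeated string a = 1;
//     string b = 2;
//   }

package main

import "fmt"

// Hand-written stand-ins for the generated types.
type StringList struct{ Items []string }

type UpdateRequest struct {
    A *StringList
    B string
}

func applyUpdate(req *UpdateRequest) string {
    switch {
    case req.A == nil:
        return "a omitted: leave stored items unchanged"
    case len(req.A.Items) == 0:
        return "a present but empty: clear all items"
    default:
        return "a present: replace stored items"
    }
}

func main() {
    fmt.Println(applyUpdate(&UpdateRequest{B: "updated value"}))                   // update b only
    fmt.Println(applyUpdate(&UpdateRequest{A: &StringList{}}))                     // clear a
    fmt.Println(applyUpdate(&UpdateRequest{A: &StringList{Items: []string{"x"}}})) // replace a
}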

Don't return null -- what to return from a search function

All. I am reading a book, and one of its ideas is "don't return null" when writing a method. The author suggests either throwing an exception or using a special-case object when a function would otherwise have to return null.
If the method's return type is a list, I know I can return an empty list instead of null. But what if the return type is a specific object? For example, a method that searches the database by unique ID and returns the result: what should I return if the method cannot find anything?
I tried throwing an exception, but I ended up writing more code and additional logic everywhere the function is called.
Any suggestions will be appreciated.
If you don't want to return null, then you could use something like an Optional class.
Throwing an exception is an expensive operation, since the stack has to be unwound and a lot of debug information has to be gathered, so you want to avoid throwing exceptions as a way to control program flow (especially if you can handle the situation without them). Returning null can be perfectly acceptable in order to avoid catching exceptions.
An example of this in action is a couple of LINQ methods in C#, which can return null:
SingleOrDefault(); // returns a single instance of an object, or null if not found
FirstOrDefault(); // returns the first matching object, or null if not found
This allows you to check for null without trying to figure out control flow using exception handling.
One exception (pardon the pun) that I can think of is using exceptions to communicate across program boundaries. If you had, for example, a data access layer in a separate DLL and you needed to communicate a database failure back to the parent program, sometimes the best way to do that is through exception handling.
Null is defined as non-existence, which seems to fit perfectly with what you want. If your code finds nothing, return nothing, right?
BUT, as always, it depends.
If your code is not meant to return nothing, then doing so can cause untold problems, and in that case it's definitely better to throw an exception: an expression like 1 / null won't work everywhere.
If non-existence is a perfectly valid return value, then why would you want to throw an exception? Assuming, of course, that you've got the code in place to deal with a non-existent value being returned from your query, there's no need to throw an exception at all.
If Null might be a valid value to return, then you should not use Null to signal that nothing was found. For example, in:
[1, 2, 3, Null, 5].find(nonInteger) -> Null
[1, 2, 3, 4].find(nonInteger) -> Null
the .find function cannot return Null to indicate failure, because sometimes it would return Null on success! Instead, you can either change the semantics, or use a special object:
# changed semantics (with extra information returned)
[1, 2, 3, Null, 5].find(nonInteger) -> index=4, value=Null
[1, 2, 3, 4].find(nonInteger) -> index=Null, value=Null
# changed semantics (with wrapper)
[1, 2, 3, Null, 5].find(nonInteger) -> new Maybe(Null)
[1, 2, 3, 4].find(nonInteger) -> new Maybe()
# special object
NoResultFound = new object()
[1, 2, 3, Null, 5].find(nonInteger) -> Null
[1, 2, 3, 4].find(nonInteger) -> NoResultFound
I don't see any problem with returning null, but throwing an exception seems like the most sensible alternative. Your best bet would be to create your own custom Exception class.
The code should look just about the same either way:
try {
    SearchResult someResult = searchForStuff();
}
catch (ResultNotFoundException rnfe) {
    /* do stuff */
}

/* almost the same as this */
SearchResult someResult = searchForStuff();
if (someResult == null) {
    /* do stuff */
}
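For comparison, Go's conventional answer to this question uses neither null nor exceptions: return a sentinel error value (the "special case" idea above) or a second boolean result. A small self-contained sketch, with findUser and lookupUser as made-up stand-ins for the database search in the question:

package main

import (
    "errors"
    "fmt"
)

type User struct{ Name string }

// Sentinel error: callers test for it explicitly instead of catching
// an exception or checking for null.
var ErrNotFound = errors.New("user not found")

func findUser(id int) (*User, error) {
    if id == 1 {
        return &User{Name: "alice"}, nil
    }
    return nil, ErrNotFound
}

// Comma-ok style: a lightweight Optional built from multiple returns.
func lookupUser(id int) (User, bool) {
    if id == 1 {
        return User{Name: "alice"}, true
    }
    return User{}, false
}

func main() {
    if _, err := findUser(2); errors.Is(err, ErrNotFound) {
        fmt.Println("sentinel: no result")
    }
    if _, ok := lookupUser(2); !ok {
        fmt.Println("comma-ok: no result")
    }
}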
