Option to disable generating internal fields in struct using the Google protobuf implementation for Golang - protocol-buffers

We are migrating our code base from gogo/protobuf to the second version of google.golang.org/protobuf.
We are running into several issues because the new structs contain some internal fields, namely:
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
As a result, proto.Equal no longer works on these structs, and we cannot compare maps and slices of them.
More importantly, we cannot pass these structs by value, since go vet complains with the error below:
copies lock value: waf_rules.GlobalSpecType contains google.golang.org/protobuf/internal/impl.MessageState contains sync.Mutex
Is there any option to disable generating these internal fields for the structs?
Any help would be appreciated. Thanks.
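There is no generator option I know of to drop these fields, but for anyone hitting the same wall, here is a minimal sketch of the usual patterns with the new API: proto.Clone for copying and proto.Equal plus protocmp for comparison. The pb import path and the GlobalSpecType usage are illustrative assumptions, not from the question.

package main

import (
    "fmt"

    "github.com/google/go-cmp/cmp"
    "google.golang.org/protobuf/proto"
    "google.golang.org/protobuf/testing/protocmp"

    pb "example.com/gen/waf_rules" // hypothetical generated package
)

func main() {
    a := &pb.GlobalSpecType{}

    // Deep-copy with proto.Clone instead of copying the struct by value,
    // which would copy the internal MessageState and trip the copylocks check.
    b := proto.Clone(a).(*pb.GlobalSpecType)

    // proto.Equal compares message contents and ignores the
    // state/sizeCache/unknownFields machinery.
    fmt.Println(proto.Equal(a, b))

    // For maps and slices of messages, compare with go-cmp plus
    // protocmp.Transform rather than reflect.DeepEqual.
    x := map[string]*pb.GlobalSpecType{"k": a}
    y := map[string]*pb.GlobalSpecType{"k": b}
    fmt.Println(cmp.Equal(x, y, protocmp.Transform()))
}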

Related

How to make a copy of a slice of structs

I have a slice of structs. I am trying to copy this slice to a new variable, since my original slice changes a lot.
Model for the sheet:
type Timesheet struct {
    ID             *int64     `json:"id"`
    TimestampStart *time.Time `json:"timestampStart"`
    TimestampEnd   *time.Time `json:"timestampEnd"`
}
SheetArrayCopy := make([]models.Sheet, len(sheetList))
copy(SheetArrayCopy, sheetList)
// several steps follow that walk through sheetList and change its values
However, when I change a value in sheetList, the corresponding value in SheetArrayCopy also changes.
From your question and Masklinn's example link, I can see that you set values through *pointer (the address the pointer points to), which means you write the new value into that shared address.
This has nothing to do with copy, which does exactly what its name says: in this case it clones the values of the fields' pointers, which still point to the addresses of the original fields' values.
The problem is the way you use the pointers and set values through them.
There are three ways to avoid the problem you mentioned:
Write a custom clone function that initializes new structs and copies only the values from the original slice into the new one (see the sketch below).
Keep using copy, but when you set a field's value, point that field's pointer at a newly allocated address; the other slice's items will still point to the old values.
Don't use pointers unless you have a special reason to.
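A minimal sketch of the first option, using the Timesheet type above (deepCopySheets is an illustrative name, not from the original code):

func deepCopySheets(src []Timesheet) []Timesheet {
    dst := make([]Timesheet, len(src))
    for i, s := range src {
        var c Timesheet
        if s.ID != nil {
            id := *s.ID // copy the pointed-to value ...
            c.ID = &id  // ... and point the clone at the fresh copy
        }
        if s.TimestampStart != nil {
            t := *s.TimestampStart
            c.TimestampStart = &t
        }
        if s.TimestampEnd != nil {
            t := *s.TimestampEnd
            c.TimestampEnd = &t
        }
        dst[i] = c
    }
    return dst
}

Because every pointer field in the copy points at its own freshly allocated value, mutating the original slice can no longer leak into the copy.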
You can refer to my code, which demonstrates this answer:
https://play.golang.org/p/-pIgEDEr-hI
And here is a link about pointers that explains directly how to use them:
https://tour.golang.org/moretypes/1
Maybe you can convert it back to JSON and unmarshal that JSON into its new destination.
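A rough sketch of that round-trip idea with the standard encoding/json package (cloneViaJSON is an illustrative name; this only copies exported, JSON-serializable fields and is slower than a hand-written clone):

func cloneViaJSON(src []Timesheet) ([]Timesheet, error) {
    // Marshal to JSON and back; the decoder allocates brand-new
    // values for every pointer field, giving a deep copy.
    data, err := json.Marshal(src)
    if err != nil {
        return nil, err
    }
    var dst []Timesheet
    if err := json.Unmarshal(data, &dst); err != nil {
        return nil, err
    }
    return dst, nil
}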

How to access unconverted driver.Value slice of sql.Rows

My goal is to get at the raw driver.Value values as deserialized by a SQL driver in its implementation of driver.Rows.Next(). I want to handle the conversion from the values returned by the driver to the needed target types myself, instead of relying on the automatic conversions built into Rows.Scan. Note this question does not ask your opinion on whether Rows.Scan "should" be used. I don't want to use it, and I am asking if there is any way to avoid it.
A meaningful answer does not use Rows.Scan at all. The dynamic approach illustrated in Working with Unknown Columns is awful: it incurs all the overhead of Scan and destroys the type information of the source columns, shredding the actual driver.Value values into sql.RawBytes instead.
The following hack works, but relies on the internal implementation detail that sql.Rows.Next() populates the unexported field lastcols with exactly the unconverted values that I want:
vpRows := reflect.ValueOf(rows)                    // rows is a *sql.Rows
vRows := reflect.Indirect(vpRows)                  // now we have the sql.Rows struct
mem := vRows.FieldByName("lastcols")               // unexported field lastcols
unsafeLastCols := unsafe.Pointer(mem.UnsafeAddr()) // Evil
plastCols := (*[]driver.Value)(unsafeLastCols)     // But effective

for rows.Next() {
    rowVals := *plastCols
    fmt.Println(rowVals)
}
The normal solution is to implement your own sql.Scanner (sketched below). But that still goes through rows.Scan, so it violates your mysterious requirement not to use rows.Scan.
If you truly must avoid rows.Scan, you'll need to write your own driver implementation (possibly wrapping an existing driver) that provides access to the driver.Value values without rows.Scan.
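For completeness, a minimal sketch of the sql.Scanner route (RawValue is an illustrative name). database/sql hands Scanner.Scan the value as one of the driver's base types (int64, float64, bool, []byte, string, time.Time, or nil), so simply storing src preserves the driver's dynamic type:

type RawValue struct {
    v interface{}
}

// Scan implements sql.Scanner and just captures whatever the driver produced.
func (r *RawValue) Scan(src interface{}) error {
    // []byte buffers may be reused by the driver between rows, so copy them.
    if b, ok := src.([]byte); ok {
        r.v = append([]byte(nil), b...)
        return nil
    }
    r.v = src
    return nil
}

Passing a *RawValue for every column then yields the unconverted values row by row, at the cost of still going through Rows.Scan.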

Subquery, for lack of a better term, when using an API written in GraphQL

I'm relatively new to GraphQL, so please bear with me ...
That said, I'm writing an app in Node.js to push/pull data between two disparate systems, one of which has an API written in GraphQL.
For the GraphQL system, I have something like the following types defined for me:
type Time {
  TimeId: Int
  TaskId: Int
  ProjectId: Int
  Project: [Project]
  TimeInSeconds: Int
  Timestamp: Date
}
and
type Task {
  TaskId: Int
  TaskName: String
  TaskDescription: String
}
Where Project is another type whose definition isn't important, only that it is included in the type definition as a field...
What I would like to know is whether there is a way to write a query for Time such that I can include the Task type's values in my results, similar to how the values for the Project type are included in the definition.
I am using someone else's API and do not have the ability to define my own custom types. I can write my own limited queries, but I don't know whether the limits are set by the devs that wrote the API or by my own limited ability with GraphQL.
My suspicion is that I cannot, and that I will have to query both separately and combine them after the fact, but I wanted to check here just in case.
Unfortunately, unless the Time type exposes some kind of field to fetch the relevant Task, you won't be able to query for it within the same request. You can include multiple queries within a single GraphQL request (illustrated below); however, they are run in parallel, which means you won't be able to use the TaskId value returned by one query as a variable in another query.
This sort of problem is best solved by modifying the schema, but if that's not an option then unfortunately the only alternative is to make each request sequentially and then combine the results client-side.
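For illustration, a single request can carry two root fields, as in the sketch below (the time and task query fields and their id arguments are guesses at the API's shape, not taken from the question); both execute in parallel, so the second cannot consume the TaskId returned by the first:

query {
  time(id: 123) {
    TaskId
    TimeInSeconds
    Timestamp
  }
  task(id: 456) {
    TaskName
    TaskDescription
  }
}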

Latency using struct bio

I want to record latency information for each struct bio that passes through the block layer. I have a module that overrides make_request_fn. I want to find out how long that bio took to reach the request queue from there, then the driver, and so on.
I tried to attach a custom struct to the bio I receive at make_request_fn, but since I did not create those bios, I can't use the bi_private field. Is there any way to work around this?
One option I have is to make a bio wrapper structure and copy bio structs into it before passing them to the lower functions, so that I could use container_of to record times.
I have read about tools like blktrace and btt, but I need that information inside my module. Is there any way to achieve this?
Thank you.
The solution I used seems to be a common workaround; I found something similar in the source of the drbd block driver. The bi_private field can be used only by the code that allocated the bio, so I used bio_clone in the following way:
struct bio *bio_copy = bio_clone(bio_source, GFP_NOIO);
struct something *instance = kmalloc(sizeof(*instance), GFP_KERNEL);
/* real code must handle bio_clone()/kmalloc() returning NULL */

instance->bio_original = bio_source;
/* update timestamps for latency inside this struct instance */

bio_copy->bi_private = instance;
bio_copy->bi_end_io = my_end_io_function;
bio_copy->bi_bdev = bio_source->bi_bdev;
...
...
make_request_fn(queue, bio_copy);
You'll have to write a bi_end_io function (sketched below); remember to call bio_endio on the original bio inside it. You might also need to copy the clone's bi_error field into bio_source's bi_error before calling bio_endio(bio_source).
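A rough sketch of such a completion handler, reusing the illustrative struct something from above (the bi_error field and the one-argument bio_endio/bi_end_io signatures match the kernel versions that still shipped bio_clone; other versions differ):

static void my_end_io_function(struct bio *bio_copy)
{
    struct something *instance = bio_copy->bi_private;
    struct bio *bio_source = instance->bio_original;

    /* record the completion timestamp for latency accounting here */

    /* propagate the completion status, then finish the original bio */
    bio_source->bi_error = bio_copy->bi_error;
    bio_endio(bio_source);

    /* release the clone and our bookkeeping struct */
    bio_put(bio_copy);
    kfree(instance);
}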
Hope this helps someone.

Finding enum type in a protobuffer

Good evening all,
I'm writing an application in IronPython that will act as a message spoofer for a system that has not been developed far enough to test against our system. Part of the application is a set of tables that show values for messages and commands. In the case of commands, some fields have enum values, and the command table is to have a drop-down box with those enum options in it.
My approach is to create a DataSet for each of our messages. The DataSet has a DataTable that holds the message fields and values, plus a table for each enum type in the message. The following code is what I use to figure out whether a field is a normal field or an enum field:
msg = mpas.M120()
msg_fields = msg.DESCRIPTOR.fields
for field in msg_fields:
    fieldEnumType = msg.DESCRIPTOR.fields_by_name[field.name].enum_type
    print("{} --> EnumType: {}".format(field.name, fieldEnumType.name if fieldEnumType is not None else 'None'))
I have found that this works as well:
msg = mpas.M120()
for k, v in msg.DESCRIPTOR.fields_by_name.items():
    print("{} --> {}".format(k, v.enum_type.name if v.enum_type is not None else 'None'))
What I get from this is the name of the enum type for each of the enum fields. I now want to get a list of all of the values for each enum field found. Here is the trick: enums that are used by a certain message and only that message are defined at the message level (i.e. under mpas.M120()), while enums that are used by other messages are defined at the top level (i.e. directly under mpas).
So, how would I go about finding the values for these enums so I can populate my drop-down boxes? I have been working on this for the better part of a day now and I cannot figure it out.
Thanks in advance...
You've already found v.enum_type, which is the EnumDescriptor corresponding to the field's enum type. You are getting name from it, but this object also contains a list of values. See the documentation here:
https://developers.google.com/protocol-buffers/docs/reference/python/google.protobuf.descriptor.EnumDescriptor-class
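For example, a sketch along these lines (building on the loop above) lists every option for an enum field; each entry in enum_type.values is an EnumValueDescriptor with name and number attributes:

msg = mpas.M120()
for field in msg.DESCRIPTOR.fields:
    if field.enum_type is not None:
        # every value the enum defines, wherever it was declared
        for value in field.enum_type.values:
            print("{}: {} = {}".format(field.name, value.name, value.number))

Because you reach the values through the field's descriptor, it does not matter whether the enum was defined at the message level or at the top level.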
