release event.data.ptr when epoll_ctl(del) a fd - events

I use event.data.ptr to point to an object.
When I epoll_ctl(del) an fd, I have to delete the object pointed to by event.data.ptr.
How can I get that pointer back?
epoll_ctl(del) simply ignores the 'event' parameter:
int epoll_ctl(int epfd, int op, int fd, struct epoll_event *event);
Do I have to maintain a vector of these objects myself? That seems a little dirty.
Thanks.

It is, in fact. There is no way to query the data field through the epoll interface. The common workaround is to keep that data in your own list in user space.
I think it's time someone wrote an epoll syscall to query that field, to avoid duplicating the data associated with a file descriptor.

How to make a copy of a slice of structs

I have a slice of structs. I am trying to copy this slice to a new variable, since my original slice changes a lot.
The model for a sheet:
type Timesheet struct {
	ID             *int64     `json:"id"`
	TimestampStart *time.Time `json:"timestampStart"`
	TimestampEnd   *time.Time `json:"timestampEnd"`
}
SheetArrayCopy := make([]models.Timesheet, len(sheetList))
copy(SheetArrayCopy, sheetList)
// several steps that go through sheetList and change its values
However, when I change a value in sheetList, the corresponding value in SheetArrayCopy also changes.
From your question and the example link #Masklinn posted, I can see that you assign values through *pointer (the address the pointer points to), which means you write the new value into that address.
This has nothing to do with copy, which does exactly what it says: in this case it copies the values of the pointer fields, which still point to the same underlying field values.
The problem is the way you use the pointers and assign values through them.
There are three ways to avoid the problem you mentioned:
Write your own clone function that creates new structs and copies only the values from the original slice into the new one (see the sketch right after this list).
Keep using copy, but when you set a field's value, point that field's pointer at a new address; the other slice's items will then still point at the old value.
Don't use pointers unless you have a specific reason to.
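Here is a minimal sketch of the first option, a hand-written deep copy (assuming the slice element type is the models.Timesheet struct shown above; the helper name copyTimesheets is made up for illustration):
func copyTimesheets(src []models.Timesheet) []models.Timesheet {
	dst := make([]models.Timesheet, len(src))
	for i, s := range src {
		// Copy the struct first, then give every pointer field its own backing value.
		dst[i] = s
		if s.ID != nil {
			id := *s.ID
			dst[i].ID = &id
		}
		if s.TimestampStart != nil {
			t := *s.TimestampStart
			dst[i].TimestampStart = &t
		}
		if s.TimestampEnd != nil {
			t := *s.TimestampEnd
			dst[i].TimestampEnd = &t
		}
	}
	return dst
}
After this, mutating sheetList through its pointer fields no longer affects the copy.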
You can refer to my code, which demonstrates this answer:
https://play.golang.org/p/-pIgEDEr-hI
Here is a link about pointers that explains directly how to use them:
https://tour.golang.org/moretypes/1
Maybe you can convert the slice back to JSON and unmarshal that JSON into its new destination.
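For completeness, a rough sketch of that JSON round-trip using encoding/json (error handling kept minimal; note that only exported fields survive the round trip):
data, err := json.Marshal(sheetList)
if err != nil {
	// handle the error
}
var sheetCopy []models.Timesheet
if err := json.Unmarshal(data, &sheetCopy); err != nil {
	// handle the error
}
// Unmarshal allocates fresh values for the pointer fields, so the copy
// no longer shares memory with sheetList.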

How to access unconverted driver.Value slice of sql.Rows

My goal is to get at the raw driver.Value values as deserialized by a SQL driver in its implementation of driver.Rows.Next(). I want to handle the conversion from the values returned by the driver to the needed target types myself, instead of relying on the automatic conversions built into Rows.Scan. Note that this question does not ask your opinion on whether Rows.Scan "should" be used. I don't want to use it, and I am asking if there is any way to avoid it.
A meaningful answer does not use Rows.Scan at all. The dynamic approach illustrated in Working with Unknown Columns is awful: it incurs all the overhead of Scan and destroys the type information of the source columns, instead shredding the actual driver.Value values into sql.RawBytes.
The following hack works, but it relies on the internal implementation detail that sql.Rows.Next() populates the internal field lastcols with exactly the unconverted values that I want:
vpRows := reflect.ValueOf(rows)                    // rows is a *sql.Rows
vRows := reflect.Indirect(vpRows)                  // now we have the sql.Rows struct
mem := vRows.FieldByName("lastcols")               // unexported field lastcols
unsafeLastCols := unsafe.Pointer(mem.UnsafeAddr()) // Evil
plastCols := (*[]driver.Value)(unsafeLastCols)     // But effective
for rows.Next() {
	rowVals := *plastCols
	fmt.Println(rowVals)
}
The normal solution is to implement your own sql.Scanner. But this does use rows.Scan, so it violates your mysterious requirement not to use rows.Scan.
If you truly must avoid rows.Scan, you'll need to write your own driver implementation (possibly wrapping an existing driver) which provides access to the driver.Value values without rows.Scan.
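For reference, a minimal sketch of the sql.Scanner route mentioned above, using database/sql/driver (the RawValue type is made up for illustration). It still goes through rows.Scan, but the Scan method receives the value essentially as the driver produced it:
type RawValue struct {
	Value driver.Value
}

func (r *RawValue) Scan(src interface{}) error {
	// src is whatever the driver returned for this column. []byte buffers
	// may be reused between rows, so copy them before keeping a reference.
	if b, ok := src.([]byte); ok {
		cp := make([]byte, len(b))
		copy(cp, b)
		r.Value = cp
		return nil
	}
	r.Value = src
	return nil
}
Usage would be along the lines of allocating one RawValue per column and passing their addresses to rows.Scan.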

Why i2c_device_id is defined separately and not part of i2c_client

Why do we have a separate structure called i2c_device_id when we already have i2c_client?
The following is the definition of i2c_device_id:
struct i2c_device_id {
	char name[I2C_NAME_SIZE];
	kernel_ulong_t driver_data;	/* Data private to the driver */
};
We already have char name[I2C_NAME_SIZE]; in i2c_client, and we could also have stored driver_data inside i2c_client. Why was there a need to define a separate structure called i2c_device_id? It feels redundant. Can someone please point out what I am missing, because I believe I am missing something?
EDIT
It seems this is mostly related to history. Please correct me if I am wrong.
i2c_device_id has been around for a long time. Then came of_device_id and acpi_device_id. Using any one of them should be enough; however, they are not mutually exclusive and can be mixed. This is why, if a device tree is being used and the driver also has an i2c_device_id table defined, the probe still receives the i2c_device_id of the matching device. I tried this on a sensor to validate it.
However, to me they should be mutually exclusive. platform_match looks like this: https://elixir.bootlin.com/linux/latest/source/drivers/base/platform.c#L965
So if a device-tree-based (OF) match succeeds, then by definition it returns from there without proceeding to the i2c_device_id table match. Hence, the driver probe should receive NULL for the i2c_device_id. But this is not the case. Can someone please help me find out why I still get an i2c_device_id in the driver probe when the device tree (OF) match is the one that happens?

Latency using struct bio

I want to gather latency information for each struct bio that passes through the block layer. I have a module that overrides make_request_fn, and I want to find out how long a bio takes from there to reach the request queue, from there to the driver, and so on.
I tried to attach a custom struct to the bio I receive at make_request_fn, but since I did not create those bios, I can't use the bi_private field. Is there any way to work around this?
One option I have is to make a bio wrapper structure and copy the bio structs into it before passing them down to the lower functions, so that I could use container_of to record times.
I have read about tools like blktrace and btt, but I need that information inside my module. Is there any way to achieve this?
Thank you.
The solution I used seems to be a common workaround; I found something similar in the source of the drbd block driver. The bi_private field can only be used by the function that allocates the bio, so I used bio_clone in the following way:
struct bio *bio_copy = bio_clone(bio_source, GFP_NOIO);
struct something *instance = kmalloc(sizeof(struct something), GFP_KERNEL);
instance->bio_original = bio_source;
/* update the timestamps used for latency accounting inside this instance */
bio_copy->bi_private = instance;
bio_copy->bi_end_io = my_end_io_function;
bio_copy->bi_bdev = bio_source->bi_bdev;
...
...
make_request_fn(queue, bio_copy);
You'll have to write a bi_end_io function. Remember to call bio_endio for the original bio inside that function. You might also need to copy the clone's bi_error field into bio_source's bi_error before calling bio_endio(bio_source).
Hope this helps someone.

How to get the memory size of a user-generated Thrift object in Go

I am new to Go and am trying to estimate the size of a Thrift-generated object in Go. The object has multiple levels, and when a member variable is a collection of pointers, I want the total size of both the pointers and the data they point to. The Sizeof function won't work in this case, as it only counts the space taken by the pointer, not the actual data. What's the best way to get a good estimate of the size of the object? Ideally, I would like to break the size down by fields and subfields.
Here is what an example object looks like in Go:
type Microshard struct {
	Header         *Header           `thrift:"header,1,required"`
	ValidatorStats []*ValidatorStat  `thrift:"validatorStats,2,required"`
	DataMap1       map[int64][]byte  `thrift:"dataMap1,3,required"`
	DataMap2       map[int64][]byte  `thrift:"dataMap2,4,required"`
	DataMap3       map[int64][]int64 `thrift:"dataMap3,5,required"`
	DebugInfoMap   map[string][]byte `thrift:"debugInfoMap,6,required"`
	Index          *IndexData        `thrift:"index,7"`
}
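To illustrate the point about Sizeof (assuming unsafe.Sizeof is meant): it only measures the top-level value, never the memory a pointer, slice, or map refers to, so a deep estimate has to walk the object. The Header type below is a made-up stand-in, not the real Thrift-generated one:
package main

import (
	"fmt"
	"unsafe"
)

type Header struct {
	Payload []byte
}

func main() {
	h := &Header{Payload: make([]byte, 1<<20)} // roughly 1 MiB of real data
	// Only the pointer itself is counted (8 bytes on a 64-bit platform),
	// not the Header struct or the payload behind it.
	fmt.Println(unsafe.Sizeof(h))
}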
