How could I reuse protobuf memory without reallocating it?

I have a protobuf message like this
message ImgReply {
  bytes data = 1;
}
And I want to assign its contents with the set_allocated method:
string *buf = new string();
GRPC_CALL_BACK_FUNCTION() {
  .....
  reply->set_allocated_data(buf);
  return Status::OK;
}
Now each time the gRPC function is called, buf will be released automatically. I would like to reuse it so that I do not need to reallocate the memory each time. I tried calling reply->release_data(), but that just clears the data field and the client receives no data at all. So how can I reuse this buf variable and stop protobuf from deleting it automatically?

The gRPC C++ sync API doesn't provide any feature for custom memory allocation. The callback API is planned to have a message allocator feature, but that hasn't been de-experimentalized yet, so it isn't ready for public use. It should be available within the next month or two.
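For reference, the ownership rules of the generated string accessors explain the behaviour described in the question. A minimal sketch (reply is an ImgReply* as in the handler above; nothing here is gRPC-specific):
std::string *buf = new std::string("payload");
reply->set_allocated_data(buf);             // the message now owns buf and will delete it
std::string *back = reply->release_data();  // ownership is handed back to the caller,
                                            // but the data field is now empty
With the sync API the reply is serialized only after the handler returns, so calling release_data() inside the handler empties the field before anything is sent, which matches the "client receives no data" symptom.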

Related

How can I return an array of integers from a Solana Rust program function to the front end?

I don't know how to write a smart contract in Solana that, after executing its logic, returns an array of integers, strings, etc. to the client, nor how to fetch that data using Web3.
There's a syscall available to on-chain programs called set_return_data, which puts data into a buffer that can be read by the higher-level programs using get_return_data. This is all mediated through opaque byte buffers, so you'll need to know how to decode the response.
If you want to fetch the data from the client side, you can simulate the transaction and read the data back from the return_data field in the response: https://edge.docs.solana.com/developing/clients/jsonrpc-api#results-50
The RPC support for this in simulated transactions is very new in version 1.11, but the return data itself is available in earlier versions.
Source code for set_return_data at https://github.com/solana-labs/solana/blob/658752cda710cb358d7ccbbc2cee06bf8009c2d4/sdk/program/src/program.rs#L102
Source code for get_return_data at https://github.com/solana-labs/solana/blob/658752cda710cb358d7ccbbc2cee06bf8009c2d4/sdk/program/src/program.rs#L117
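A minimal sketch of both sides on-chain, assuming the solana_program crate (the little-endian integer encoding is just one choice; use whatever scheme your client knows how to decode):
use solana_program::program::{get_return_data, set_return_data};

// In the invoked program: pack the integers into bytes and put them into the
// return-data buffer.
fn return_integers(values: &[i32]) {
    let mut bytes = Vec::with_capacity(values.len() * 4);
    for v in values {
        bytes.extend_from_slice(&v.to_le_bytes());
    }
    set_return_data(&bytes);
}

// In a calling program, after invoke(): read the buffer back and decode it
// with the same scheme.
fn read_back() {
    if let Some((_program_id, data)) = get_return_data() {
        // decode `data` here
    }
}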
So, programs do not return data (other than success or failure).
However, most programs write data to a program-owned account's data field, and this can be read from client apps (Rust, Python, TS/JS, etc.).
If using the Solana web3 library, you can call getAccountInfo on the Connection object. This will return the byte array of the account. You will then need to deserialize that data. You have to know how the program serializes the data to reverse it successfully.
Check the Solana Cookbook for an overview of deserializing account data with borsh: https://solanacookbook.com/guides/serialization.html#how-to-deserialize-account-data-on-the-client
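A minimal client-side sketch with @solana/web3.js (the account address is a placeholder, and the decoding of the returned bytes depends entirely on how the program serialized them):
import { Connection, PublicKey, clusterApiUrl } from "@solana/web3.js";

async function readAccountData(): Promise<Buffer | null> {
  const connection = new Connection(clusterApiUrl("devnet"));
  const account = new PublicKey("..."); // the program-owned account (placeholder)

  const info = await connection.getAccountInfo(account);
  if (info === null) return null; // account does not exist
  return info.data;               // raw bytes; deserialize with borsh etc.
}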

When making a Go RPC call, the return type is a channel

Firstly, here is an RPC server. Please notice that one of the return types is a chan:
func (c *Coordinator) FetchTask() (*chan string, error) {
    // ...
    return &reply, nil
}
Then the client makes an RPC call. Typically the caller will get a channel whose type is *chan string.
call("Coordinator.FetchTask", &args, &reply)
Here is my question. If the server continuously writes into the channel:
for i := 0; i < 100; i++ {
    reply <- strconv.Itoa(i)
}
🎈🎈🎈 Can the client continuously read from the channel?
for {
    var s string = <-reply
}
I guess the client can't, because the server and client do not share memory; they communicate over the network. Therefore, even though the variable reply is a pointer, it points to different addresses on the server and on the client.
I'm not sure about this. What do you think? Thanks a lot!
🎈🎈 BTW, is there any way to implement a REAL, stateful channel between server and client?
As you already mentioned, channels are in-memory values, and it is not possible to use them across applications or systems. On the other hand, gRPC passes and parses binary data, so sending a channel pointer would only transmit the address of the channel in the server's memory. Once the client receives that address, it would point at that location in its own machine's memory, which could unfortunately hold any sort of data.
If you want to push a group of data (say, an array of strings), you can use server streaming or bidirectional streaming (see the sketch below).
On the other hand, if you want some sort of stable, keep-alive connection, you can also consider websockets.
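A minimal sketch of the server-streaming option, assuming a gRPC service defined roughly as rpc FetchTasks(TaskRequest) returns (stream TaskReply) with generated code in a package pb (all of these names are hypothetical, not taken from the question):
import (
    "strconv"

    pb "example.com/coordinator/pb" // hypothetical generated package
)

// The server pushes values one at a time over the stream instead of trying
// to hand the client a channel.
func (c *Coordinator) FetchTasks(req *pb.TaskRequest, stream pb.Coordinator_FetchTasksServer) error {
    for i := 0; i < 100; i++ {
        if err := stream.Send(&pb.TaskReply{Task: strconv.Itoa(i)}); err != nil {
            return err // the client disconnected or the stream broke
        }
    }
    return nil
}
On the client side, the generated FetchTasks call returns a stream whose Recv() method can be called in a loop, which is the closest practical equivalent to reading from a channel across the network.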

C++ libcurl Caching Data Response

I am using the curl library and its easy interface for this.
I am going to leave out code for now, as I do not think it is necessary for explaining the issue. I am using C++ and the curl library to retrieve a serialized Google protobuf from a server. The serialized protobuf contains an integer and a static array of objects. The compiled protobuf struct looks like:
typedef struct _ExperimentRunner_ExperimentList_RES {
    int32_t pollFrequency;
    pb_size_t activeExperiments_count;
    ExperimentRunner_ExperimentInfo activeExperiments[5];
/* ##protoc_insertion_point(struct:ExperimentRunner_ExperimentList_RES) */
} ExperimentRunner_ExperimentList_RES;
When tested, everything works fine and the protobuf is retrieved from the server and parsed correctly. The GET request is for data, not a file, from the server.
The code is set up so that the experiment list is retrieved at every poll interval. The issue is the following scenario:
1. The app starts and retrieves the experiment list, which currently has one entry.
2. I remove the entry from the server database and wait for the app to re-poll the server.
3. The app sees the server response still containing the entry that was removed. I confirm it is removed by doing a curl from the command line.
There seems to be an issue with the curl library caching the data returned by the server and then serving it again when I make a request, since restarting the application gets the correct data. I have implemented CURLOPT_DEBUGFUNCTION and see the old data being returned by the request when I know the server has deleted it from the database. Any suggestions as to what options or caching might be causing this?
This ended up being a dumb mistake on my part, caused by misunderstanding how the curl library works (I think). I was using CURLOPT_WRITEFUNCTION to capture the data in a char array. I was not fully clearing this buffer between requests, because I assumed that libcurl would terminate the newly received data with a "\0", invalidating the old data; that assumption appears to be untrue. Once I cleared the entire buffer before the next request, everything worked great. Below is the data capture function, in case it helps:
size_t CURL_RECIEVE_DATA_BUFF(void *buffer, size_t size, size_t nmemb, void *userp)
{
    CURL_DATA_BUFF *curlData = (CURL_DATA_BUFF *)userp;
    if (curlData)
    {
        // Reject the chunk if it would overflow the capture buffer.
        if (curlData->amountWriten + size * nmemb > curlData->maxSize) {
            LogIt.Add(Error, "%s:%s Server sending more data than expected, max is: %d bytes\n", __FILE__, __FUNCTION__, curlData->maxSize);
            return 0;
        }
        memcpy(&(curlData->buff[curlData->amountWriten]), buffer, size * nmemb);
        curlData->amountWriten += size * nmemb;
        return size * nmemb;
    }
    return 0;
}
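A minimal sketch of the fix described above, i.e. resetting the capture buffer before every request (curlData is the CURL_DATA_BUFF instance passed as userp, and curl is an already-initialized CURL* easy handle; both are assumed from the surrounding code):
// Wipe whatever the previous response left behind, otherwise a shorter new
// response would leave stale bytes after it that could still parse as valid data.
memset(curlData.buff, 0, curlData.maxSize);
curlData.amountWriten = 0;

curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, CURL_RECIEVE_DATA_BUFF);
curl_easy_setopt(curl, CURLOPT_WRITEDATA, &curlData);
CURLcode res = curl_easy_perform(curl);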

Biztalk Debatched Message Value Caching

I get a file with 4,000 entries and debatch it, so I don't lose the whole message if one entry has corrupt data.
The BizTalk map accesses a SQL Server. Before I debatched the message, I simply cached the SQL data in the map, but now I have 4,000 independent map executions.
Without caching, the process takes about 30 times longer.
Is there a way to cache the data from SQL Server somewhere outside the map without losing much performance?
Accessing a database in a map is not a recommended pattern.
Since what you describe sounds like you're retrieving static reference data, another option is to move the process to an Orchestration where the reference data is retrieved one time into a Message.
Then, you can use a dual input Map supplying the reference data and the business message.
In this pattern, you can either debatch in the Orchestration or use a Sequential Convoy.
I would always avoid accessing SQL Server in a map - it gets very easy to inadvertently make many more calls than you intend (whether because of a mistake in the map design or because of unexpected volume or usage of the map on a particular port or set of ports). In fact, I would generally avoid making any kind of call in a map that has to access another system or service, but if you must, then caching can help.
You can cache using, for example, MemoryCache. The pattern I use with that generally involves a custom C# library where you first check the cache for your value; if there's a miss, you query SQL (either for the particular entry or to repopulate the entire cache), e.g.:
object _syncRoot = new object();
...
public string CheckCache(string key)
{
    string check = MemoryCache.Default.Get(key) as string;
    if (check == null)
    {
        lock (_syncRoot)
        {
            // make sure someone else didn't get here before we acquired the lock, avoid duplicate work
            check = MemoryCache.Default.Get(key) as string;
            if (check != null) return check;
            string sql = @"SELECT ...";
            using (SqlConnection conn = new SqlConnection(connStr))
            {
                conn.Open();
                using (SqlCommand cmd = conn.CreateCommand())
                {
                    cmd.CommandText = sql;
                    cmd.Parameters.AddWithValue(...);
                    // ExecuteScalar or ExecuteReader as appropriate, read the value out into check,
                    // then use MemoryCache.Default.Add with a sensible expiration to cache it
                }
            }
        }
    }
    return check;
}
A few things to keep in mind:
This will work on a per-AppDomain basis, and pipelines and orchestrations run in separate AppDomains. If you are executing this map in both places, you'll end up with caches in both places. The complexity added in trying to share this across AppDomains is probably not worth it, but if you really need that, you should isolate your caching into something like a WCF NetTcp service.
This will use more memory - you shouldn't just throw everything and anything into a cache in BizTalk, and if you're going to cache stuff make sure you have lots of available memory on the machine and that BizTalk is configured to be able to use it.
The MemoryCache can store whatever you want - I'm using strings here, but it could be other primitive types or objects as well.

Two-way-binding for golang structs

TLDR: Can I register callback functions in golang to get notified if a struct member is changed?
I would like to create a simple two-way-binding between a go server and an angular client. The communication is done via websockets.
Example:
Go:
type SharedType struct {
    A int
    B string
}
sharedType := &SharedType{}
...
sharedType.A = 52
JavaScript:
var sharedType = {A: 0, B: ""};
...
sharedType.A = 52;
Idea:
In both cases, after modifying the values, I want to trigger a custom callback function, send a message via the websocket, and update the value on the client/server side accordingly.
The sent message should only state which value changed (the key/index) and what the new value is. It should also support nested types (structs that contain other structs) without the need to transmit everything.
On the client side (angular), I can detect changes of JavaScript objects by registering a callback function.
On the server side (golang), I could create my own map[] and slice[] implementations to trigger callbacks every time a member is modified (see the Cabinet class in this example: https://appliedgo.net/generics/).
Within these callback-functions, I could then send the modified data to the other side, so two-way binding would be possible for maps and slices.
My Question:
I would like to avoid things like
sharedType.A = 52
sharedType.MemberChanged("A")
// or:
sharedType.Set("A", 52) //.. which is equivalent to map[], just with a predefined set of allowed keys
Is there any way in golang to get informed if a struct member is modified? Or is there any other, generic way for easy two-way binding without huge amounts of boiler-plate code?
No, it's not possible.
But the real question is: how would you handle all that magic in your Go program?
Suppose what you'd like to have were indeed possible. Then an innocent assignment
v.A = 42
would, among other things, trigger sending data over a websocket connection to the client.
Now what happens if the connection is closed (the client disconnected) and the send fails?
What happens if the send fails to complete before a deadline is reached?
OK, suppose you get it at least partially right and the actual modification of the local field happens only if the send succeeds.
Still, how should send errors be handled?
Say, what should happen if the third assignment in
v.A = 42
v.B = "foo"
v.C = 1e10-23
fails?
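Since plain assignments cannot be intercepted, the explicit-setter shape the question wants to avoid is still the usual workaround in practice; a minimal sketch (the names are illustrative, not a library API):
type SharedType struct {
    a        int
    b        string
    onChange func(field string, value interface{})
}

func (s *SharedType) SetA(v int) {
    s.a = v
    if s.onChange != nil {
        s.onChange("A", v) // e.g. marshal {field, value} and push it over the websocket
    }
}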
You could try using server-sent events (SSE) to send realtime data to the frontend, while sending a single POST request with your changes. That way you can monitor on the backend and push data every second.
