append's return value in Go

After reading this article, I have some questions in mind.
Basically, why do we need to store the return value of append() in Go? How is the function actually implemented?
I have tried to replicate (sort of) the mechanism of append in C (which, if I'm not mistaken, was the first language used to implement Go). I used malloc() instead of an array so that the backing storage is not deallocated after the function returns.
Here is my code:
#include <stdio.h>
#include <stdlib.h>

typedef struct SliceHeader {
    int length;
    int capacity;
    int *zerothElement;
} SliceHeader;

void append(SliceHeader *sh, int element)
{
    if (sh->length == sh->capacity) {
        // grow the capacity and keep realloc's return value,
        // since the block may have been moved
        sh->capacity += 10;
        sh->zerothElement = (int *) realloc(sh->zerothElement,
                                            sh->capacity * sizeof(int));
    }
    sh->zerothElement[sh->length] = element;
    sh->length++;
}

SliceHeader * make(int capacity)
{
    SliceHeader *sh = (SliceHeader *) malloc(sizeof(SliceHeader));
    sh->length = 0;
    sh->capacity = capacity;
    sh->zerothElement = (int *) malloc(capacity * sizeof(int));
    return sh;
}

int main()
{
    SliceHeader *sh = make(3);
    append(sh, 5);
    append(sh, 10);
    append(sh, 15);
    append(sh, 20); // exceeds the original capacity, should reallocate
    for (int i = 0; i < sh->length; i++) {
        printf("%d\n", sh->zerothElement[i]);
    }
    free(sh->zerothElement);
    free(sh);
    return 0;
}
(I omit NULL checks to show only the parts relevant to the main question.)
With this code I can use append() without needing to store its return value, and there is no need to create a new slice header.
So what is it about the implementation of append() in Go that makes it necessary to store a new slice header? Even if zerothElement uses an array, doesn't that mean only the array needs to change, rather than the whole slice header?
What am I missing here?
Thanks :)

Basically, why do we need to store the return value of append() in Go?
You only need to store this value if you intend to use the slice with the appended value.
How is the function actually implemented?
Go is open source; just consult the source code. (By the way: this is uninteresting.)
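For context, the key difference from the C code in the question is that Go passes the slice header to append by value, not through a pointer, so append cannot update the caller's header in place; when it grows the backing array it has to hand a new header back. Below is a minimal sketch of that calling convention, written in C++ to mirror the question's own struct (the names are illustrative, not Go's actual runtime code):

#include <cstdio>
#include <cstdlib>

// Illustration only: a slice header passed *by value*, the way Go passes it.
struct SliceHeader {
    int length;
    int capacity;
    int *zerothElement;
};

// Because `sh` is a copy, changes to its length/capacity/pointer are lost
// unless the updated header is handed back to the caller.
SliceHeader append(SliceHeader sh, int element)
{
    if (sh.length == sh.capacity) {
        sh.capacity += 10;
        sh.zerothElement = (int *) std::realloc(sh.zerothElement,
                                                sh.capacity * sizeof(int));
    }
    sh.zerothElement[sh.length] = element;
    sh.length++;
    return sh; // the caller must store this, just like in Go
}

int main()
{
    SliceHeader sh{0, 3, (int *) std::malloc(3 * sizeof(int))};
    sh = append(sh, 5);   // discarding the result would lose the new length
    sh = append(sh, 10);
    sh = append(sh, 15);
    sh = append(sh, 20);  // ...and, after a reallocation, the new pointer too
    for (int i = 0; i < sh.length; i++)
        std::printf("%d\n", sh.zerothElement[i]);
    std::free(sh.zerothElement);
    return 0;
}

Discarding the return value here would lose the updated length and, after a growth reallocation, the new backing-array pointer, which is exactly why Go requires s = append(s, x).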

Related

Is there any atomicMul() in cuda? [duplicate]

There are atomicAdd and atomicSub, but it seems that atomicMul and atomicDiv don't exist! Is that possible? I need to implement the following code:
atomicMul(&accumulation[index], value)
How can I do it?
OK, I solved it. But I cannot understand how atomicMul works, and I don't know how to write it for floats.
#include <stdio.h>
#include <cuda_runtime.h>

__device__ double atomicMul(double* address, double val)
{
    unsigned long long int* address_as_ull = (unsigned long long int*)address;
    unsigned long long int old = *address_as_ull, assumed;
    do {
        assumed = old;
        old = atomicCAS(address_as_ull, assumed,
                        __double_as_longlong(val * __longlong_as_double(assumed)));
    } while (assumed != old);
    return __longlong_as_double(old);
}

__global__ void try_atomicMul(double* d_a, double* d_out)
{
    atomicMul(d_out, d_a[threadIdx.x]);
}

int main()
{
    double h_a[] = {5, 6, 7, 8}, h_out = 1;
    double *d_a, *d_out;

    cudaMalloc((void **)&d_a, 4 * sizeof(double));
    cudaMalloc((void **)&d_out, sizeof(double));
    cudaMemcpy(d_a, h_a, 4 * sizeof(double), cudaMemcpyHostToDevice);
    cudaMemcpy(d_out, &h_out, sizeof(double), cudaMemcpyHostToDevice);

    dim3 blockDim(4);
    dim3 gridDim(1);
    try_atomicMul<<<gridDim, blockDim>>>(d_a, d_out);

    cudaMemcpy(&h_out, d_out, sizeof(double), cudaMemcpyDeviceToHost);
    printf("%f \n", h_out);

    cudaFree(d_a);
    cudaFree(d_out); // also release the output buffer
    return 0;
}
I'll supplement horus' answer based on my understanding of atomicCAS. My answer may be wrong in detail, because I didn't look inside the atomicCAS function but only read the documentation about it (atomicCAS, Atomic Functions). Feel free to correct my answer.
How atomicMul works
According to my understanding, the behavior of atomicCAS(int* address, int compare, int val) is the following:
1. Copy *address into old (i.e. old = *address).
2. Store (old == compare ? val : old) to *address. (At this point, old and *address can differ, depending on whether the condition matched.)
3. Return old.
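Written out as plain (and deliberately non-atomic) code, those three steps amount to the following sketch; the real instruction performs them as one indivisible operation on the GPU:

// Illustration only: the logical behavior of atomicCAS(address, compare, val).
// The hardware executes these three steps atomically; this sketch does not.
int atomicCAS_behavior(int *address, int compare, int val)
{
    int old = *address;                       // 1. read the current value
    *address = (old == compare) ? val : old;  // 2. store val only if it matched
    return old;                               // 3. return what was read
}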
Its behavior becomes clearer when we look at it together with the atomicMul function's definition.
unsigned long long int* address_as_ull = (unsigned long long int*)address;
unsigned long long int oldValue = *address_as_ull, assumed; // renamed 'old' to 'oldValue' because it can be confused with 'old' inside atomicCAS
do {
    assumed = oldValue;
    // other threads can access and modify the value of *address_as_ull between the upper and lower line
    oldValue = atomicCAS(address_as_ull, assumed,
                         __double_as_longlong(val * __longlong_as_double(assumed)));
} while (assumed != oldValue);
return __longlong_as_double(oldValue);
What we want to do is read the value from address (which is equal to address_as_ull), multiply it by some value, and then write it back. The problem is that other threads can access and modify *address between the read, the modify, and the write.
To ensure that no other thread interfered, we check whether the value of *address is equal to what we assumed to be there. Say another thread modified the value of *address between assumed = oldValue and oldValue = atomicCAS(...). The modified value of *address will be copied into the old variable inside atomicCAS (see behavior 1 of atomicCAS above).
Since atomicCAS updates *address according to *address = (old == compare ? val : old), and old no longer equals compare, *address won't be changed.
Then atomicCAS returns old, which is assigned to oldValue, so the loop keeps going and we can take another shot at the next iteration. When *address is not modified between the read and the write, the new product is written to *address and the loop ends.
How to write it for float
Short answer:
__device__ float atomicMul(float* address, float val)
{
    int* address_as_int = (int*)address;
    int old = *address_as_int, assumed;
    do {
        assumed = old;
        old = atomicCAS(address_as_int, assumed,
                        __float_as_int(val * __float_as_int(assumed)));
    } while (assumed != old);
    return __int_as_float(old);
}
I didn't test it, so there may be some errors. Correct me if I'm wrong.
How it works:
atomicCAS only supports integer types, so we have to manually reinterpret a float/double variable as an integer type to pass it into the function, and then reinterpret the integer result back as float/double. What I've changed above is double to float and unsigned long long to int, because the size of float matches that of int.
Kyungsu's answer was almost correct. On the line assigning old = atomicCAS(...), though, he used __float_as_int where he should have used __int_as_float. I corrected his code below:
__device__ float atomicMul(float* address, float val)
{
    // Implementation of atomic multiplication
    // See https://stackoverflow.com/questions/43354798/atomic-multiplication-and-division
    int* address_as_int = (int*)address;
    int old = *address_as_int;
    int assumed;
    do {
        assumed = old;
        old = atomicCAS(address_as_int, assumed,
                        __float_as_int(val * __int_as_float(assumed)));
    } while (assumed != old);
    return __int_as_float(old);
}

Pointer not printing char[] array

I'm writing some code to take in a string, turn it into a char array, and then print it back to the user (before passing it to another function).
Currently the code works up to dat.toCharArray(DatTim, datsize); however, the pointer does not seem to be working, as the while loop never fires.
String input = "Test String for Foo";
InputParse(input);

void InputParse (String dat)
{
    //Write Data
    datsize = dat.length()+1;
    const char DatTim[datsize];
    dat.toCharArray(DatTim, datsize);

    //Debug print back
    for (int i = 0; i < datsize; i++)
    {
        Serial.write(DatTim[i]);
    }
    Serial.println();

    //Debug pointer print back
    const char *b;
    b = *DatTim;
    while (*b)
    {
        Serial.print(*b);
        b++;
    }

    Foo(*DatTim);
}
I can't figure out the difference between what I have above and the template code provided by Majenko:
void PrintString(const char *str)
{
    const char *p;
    p = str;
    while (*p)
    {
        Serial.print(*p);
        p++;
    }
}
The expression *DatTim is the same as DatTim[0], i.e. it gets the first character in the array, which is then assigned to the pointer b (something the compiler should have warned you about).
Arrays naturally decay to pointers to their first element; that is, DatTim is equal to &DatTim[0].
The simple solution is to do
const char *b = DatTim;
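Here is the same fix as a small self-contained desktop sketch (std::putchar stands in for Serial.print, which only exists in the Arduino environment; DatTim is initialized directly instead of via toCharArray):

#include <cstdio>

int main()
{
    const char DatTim[] = "Test String for Foo";
    const char *b = DatTim;      // the array decays to a pointer to its first element
    while (*b)                   // stops at the terminating '\0'
    {
        std::putchar(*b);        // stand-in for Serial.print(*b)
        b++;
    }
    std::putchar('\n');
    return 0;
}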

node.js c++ addon - afraid of memory leak

First of all, I admit I'm a newbie at C++ addons for Node.js.
I'm writing my first addon and I have reached a good result: the addon does what I want. I copied from various examples I found on the internet to exchange complex data between the two languages, but I understood almost nothing of what I wrote.
The first thing scaring me is that I wrote nothing that seems to free any memory; another thing seriously worrying me is that I don't know whether what I wrote helps the V8 garbage collector or confuses it. I also don't know if there are better ways to do what I did (iterating over JS Object keys in C++, creating JS Objects in C++, creating Strings in C++ to be used as properties of JS Objects, and whatever else you can find wrong in my code).
So, before going on with my job of writing the real math of the addon, I would like to share the nan and V8 part of it with the community, to ask whether you see anything wrong or anything that could be done in a better way.
Thank you everybody for your help,
iCC
#include <map>
#include <nan.h>
using v8::Array;
using v8::Function;
using v8::FunctionTemplate;
using v8::Local;
using v8::Number;
using v8::Object;
using v8::Value;
using v8::String;
using Nan::AsyncQueueWorker;
using Nan::AsyncWorker;
using Nan::Callback;
using Nan::GetFunction;
using Nan::HandleScope;
using Nan::New;
using Nan::Null;
using Nan::Set;
using Nan::To;
using namespace std;
class Data {
public:
    int dt1;
    int dt2;
    int dt3;
    int dt4;
};

class Result {
public:
    int x1;
    int x2;
};

class Stats {
public:
    int stat1;
    int stat2;
};

typedef map<int, Data> DataSet;
typedef map<int, DataSet> DataMap;
typedef map<float, Result> ResultSet;
typedef map<int, ResultSet> ResultMap;

class MyAddOn: public AsyncWorker {
private:
    DataMap *datas;
    ResultMap results;
    Stats stats;

public:
    MyAddOn(Callback *callback, DataMap *set): AsyncWorker(callback), datas(set) {}
    ~MyAddOn() { delete datas; }

    void Execute () {
        for(DataMap::iterator i = datas->begin(); i != datas->end(); ++i) {
            int res = i->first;
            DataSet *datas = &i->second;
            for(DataSet::iterator l = datas->begin(); l != datas->end(); ++l) {
                int dt4 = l->first;
                Data *data = &l->second;
                // TODO: real population of stats and result
            }
            // test result population
            results[res][res].x1 = res;
            results[res][res].x2 = res;
        }
        // test stats population
        stats.stat1 = 23;
        stats.stat2 = 42;
    }

    void HandleOKCallback () {
        Local<Object> obj;
        Local<Object> res = New<Object>();
        Local<Array> rslt = New<Array>();
        Local<Object> sts = New<Object>();
        Local<String> x1K = New<String>("x1").ToLocalChecked();
        Local<String> x2K = New<String>("x2").ToLocalChecked();
        uint32_t idx = 0;
        for(ResultMap::iterator i = results.begin(); i != results.end(); ++i) {
            ResultSet *set = &i->second;
            for(ResultSet::iterator l = set->begin(); l != set->end(); ++l) {
                Result *result = &l->second;
                // is it ok to declare obj just once outside the cycles?
                obj = New<Object>();
                // is it ok to use same x1K and x2K instances for all objects?
                Set(obj, x1K, New<Number>(result->x1));
                Set(obj, x2K, New<Number>(result->x2));
                Set(rslt, idx++, obj);
            }
        }
        Set(sts, New<String>("stat1").ToLocalChecked(), New<Number>(stats.stat1));
        Set(sts, New<String>("stat2").ToLocalChecked(), New<Number>(stats.stat2));
        Set(res, New<String>("result").ToLocalChecked(), rslt);
        Set(res, New<String>("stats" ).ToLocalChecked(), sts);
        Local<Value> argv[] = { Null(), res };
        callback->Call(2, argv);
    }
};

NAN_METHOD(AddOn) {
    Local<Object> datas = info[0].As<Object>();
    Callback *callback = new Callback(info[1].As<Function>());
    Local<Array> props = datas->GetOwnPropertyNames();
    Local<String> dt1K = Nan::New("dt1").ToLocalChecked();
    Local<String> dt2K = Nan::New("dt2").ToLocalChecked();
    Local<String> dt3K = Nan::New("dt3").ToLocalChecked();
    Local<Array> props2;
    Local<Value> key;
    Local<Object> value;
    Local<Object> data;
    DataMap *set = new DataMap();
    int res;
    int dt4;
    DataSet *dts;
    Data *dt;
    for(uint32_t i = 0; i < props->Length(); i++) {
        // is it ok to declare key, value, props2 and res just once outside the cycle?
        key = props->Get(i);
        value = datas->Get(key)->ToObject();
        props2 = value->GetOwnPropertyNames();
        res = To<int>(key).FromJust();
        dts = &((*set)[res]);
        for(uint32_t l = 0; l < props2->Length(); l++) {
            // is it ok to declare key, data and dt4 just once outside the cycles?
            key = props2->Get(l);
            data = value->Get(key)->ToObject();
            dt4 = To<int>(key).FromJust();
            dt = &((*dts)[dt4]);
            int dt1 = To<int>(data->Get(dt1K)).FromJust();
            int dt2 = To<int>(data->Get(dt2K)).FromJust();
            int dt3 = To<int>(data->Get(dt3K)).FromJust();
            dt->dt1 = dt1;
            dt->dt2 = dt2;
            dt->dt3 = dt3;
            dt->dt4 = dt4;
        }
    }
    AsyncQueueWorker(new MyAddOn(callback, set));
}

NAN_MODULE_INIT(Init) {
    Set(target, New<String>("myaddon").ToLocalChecked(), GetFunction(New<FunctionTemplate>(AddOn)).ToLocalChecked());
}

NODE_MODULE(myaddon, Init)
One and a half years later...
If somebody is interested: my server has been up and running since my question, and the amount of memory it requires is stable.
I can't say whether the code I wrote really has no memory leak, or whether lost memory is freed at the end of each thread execution, but if you are afraid as I was, I can say that using the same structure and calls has not caused any real problem.
You do actually free up some of the memory you use, with the line of code:
~MyAddOn() { delete datas; }
In essence, C++ memory management boils down to always calling delete for every object created by new. There are also many additional architecture-specific and legacy 'C' memory management functions, but it is not strictly necessary to use these when you do not require the performance benefits.
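As a minimal illustration of that rule (plain C++, independent of V8 and nan):

#include <string>

struct Payload {
    std::string name;
    int value;
};

int main()
{
    Payload *p = new Payload{"answer", 42}; // created with new ...
    // ... use *p ...
    delete p;                               // ... so it must eventually be released with delete
    return 0;
}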
As an example of what could potentially be a memory leak: You're passing the object held in the *callback pointer to the function AsyncQueueWorker. Yet nowhere in your code is this pointer freed, so unless the Queue worker frees it for you, there is a memory leak here.
You can use a memory tool such as valgrind to test your program further. It will spot most memory problems for you and comes highly recommended.
One thing I've observed is that you often ask (paraphrased):
Is it okay to declare X outside my loop?
To which the answer is that declaring variables inside your loops is better whenever you can do it. Declare variables as deep inside as you can, unless you have to re-use them. A variable's scope is restricted to the innermost set of {} brackets enclosing its declaration. You can read more about this in this question. A small illustration of that scoping advice follows below.
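The sketch below is plain C++, unrelated to the addon code itself:

#include <iostream>
#include <vector>

int main()
{
    std::vector<int> values = {1, 2, 3};
    for (std::size_t i = 0; i < values.size(); ++i) {
        int doubled = values[i] * 2;   // lives only inside this iteration's { } block
        std::cout << doubled << '\n';
    }
    // 'doubled' does not exist here, so it cannot be reused by mistake.
    return 0;
}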
is it ok to use same x1K and x2K instances for all objects?
In essence, when you do this, if one of these objects modifies its x1K string, it will change for all of them. The advantage is that you save memory: if the string is the same for all of these objects anyway, then instead of storing, say, 1,000,000 copies of it, your computer keeps a single one in memory and 1,000,000 pointers to it. If the string is 9 ASCII characters long or longer under amd64, that amounts to a significant memory saving.
By the way, if you don't intend to modify a variable after its declaration, you can declare it as const, a keyword short for "constant", which makes the compiler check that the variable is not modified after its declaration. You may have to deal with quite a few compiler errors about functions that accept only non-const versions of things they don't actually modify; some of that code may not be your own, in which case you're out of luck. Being as conservative as possible with non-const variables can help spot problems.

Is there an easier way to set/get values in a gSOAP request/response?

I am using gSOAP to configure an ONVIF-compatible camera.
Currently, I am manually setting all the parameters in the request by doing something like this (this is for SetVideoEncoderConfiguration):
MediaBindingProxy mediaDevice (uri);
AUTHENTICATE (mediaDevice);
_trt__SetVideoEncoderConfiguration req;
_trt__SetVideoEncoderConfigurationResponse resp;
struct tt__VideoEncoderConfiguration encoderConfig;
struct tt__VideoResolution resolutionConfig;
encoderConfig.Name = strdup (name);
encoderConfig.UseCount = 1;
encoderConfig.Quality = 50;
if (strcmp (encoding, "H264") == 0)
    encoderConfig.Encoding = tt__VideoEncoding__H264;
else if (strcmp (encoding, "JPEG") == 0)
    encoderConfig.Encoding = tt__VideoEncoding__JPEG;
encoderConfig.token = strdup (profileToken);
encoderConfig.SessionTimeout = (LONG64)"PT0S";
resolutionConfig.Width=1280;
resolutionConfig.Height=720;
encoderConfig.Resolution = &resolutionConfig;
tt__VideoRateControl rateControl;
rateControl.FrameRateLimit = 15;
rateControl.EncodingInterval = 1;
rateControl.BitrateLimit = 4500;
encoderConfig.RateControl = &rateControl;
struct tt__H264Configuration h264;
h264.GovLength = 30;
h264.H264Profile = tt__H264Profile__Baseline;
encoderConfig.H264 = &h264;
struct tt__MulticastConfiguration multicast;
struct tt__IPAddress address;
address.IPv4Address = strdup ("0.0.0.0");
multicast.Address = &address;
encoderConfig.Multicast = &multicast;
req.Configuration = &encoderConfig;
req.ForcePersistence = true;
int ret = mediaDevice.SetVideoEncoderConfiguration (&req, resp);
qDebug () << "Set Encoder: " << ret;
Is there an easier way to do this? Maybe some function calls that set the request parameters? Another way I found, with GetMediaUri, was to use something like
soap_new_req__trt__GetStreamUri (mediaDevice.soap,soap_new_req_tt__StreamSetup (mediaDevice.soap, (enum tt__StreamType)0, soap_new_tt__Transport(mediaDevice.soap), 1, NULL), "profile1");
Are these the only two ways for client-side code with gSOAP?
-Mandar Joshi
There are four variations of soap_new_T() to allocate data of type T in C++ with gSOAP:
T * soap_new_T(struct soap*) returns a new instance of T that is default initialized and allocated on the heap managed by the soap context.
T * soap_new_T(struct soap*, int n) returns an array of n new instances of T on the managed heap. The instances in the array are default initialized as described above.
T * soap_new_req_T(struct soap*, ...) (structs and classes only) returns a new instance of T allocated on the managed heap and sets the required data members to the values specified in the other arguments ....
T * soap_new_set_T(struct soap*, ...) (structs and classes only) returns a new instance of T on the managed heap and sets the public/serializable data members to the values specified in the other arguments ....
Use soap_strdup(struct soap*, const char*) instead of strdup to dup strings onto the managed heap.
All data on the managed heap is mass-deleted with soap_destroy(soap) and soap_end(soap) (call these in that order), which must be called before soap_done(soap) or soap_free(soap).
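A hedged sketch of that allocation-and-teardown order, using only the calls named above (the soapH.h header name and the example() wrapper are assumptions about a typical generated project, not part of this answer's API description):

#include "soapH.h"   // assumed: the header generated by soapcpp2 for your service

void example(struct soap *soap)
{
    // allocate strings and data on the soap-managed heap
    char *name = soap_strdup(soap, "encoder-1");
    (void)name; // placeholder: build requests with soap_new_*() allocations and call the proxy

    // mass-delete everything on the managed heap, in this order:
    soap_destroy(soap);  // delete managed C++ class instances
    soap_end(soap);      // delete managed primitive data and temporaries
    soap_done(soap);     // then finalize the context (or soap_free(soap) if it was allocated with soap_new())
}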
To allocate pointers to data, use templates:
template<class T>
T * soap_make(struct soap *soap, T val)
{
    T *p = (T*)soap_malloc(soap, sizeof(T));
    if (p)
        *p = val;
    return p;
}

template<class T>
T **soap_make_array(struct soap *soap, T* array, int n)
{
    T **p = (T**)soap_malloc(soap, n * sizeof(T*));
    for (int i = 0; i < n; ++i)
        p[i] = &array[i];
    return p;
}
Then use soap_make<int>(soap, 123) to create a pointer to the value 123 on the managed heap and soap_make_array(soap, soap_new_CLASSNAME(soap, 100), 100) to create 100 pointers to 100 instances of CLASSNAME.
The gSOAP tools also generate deep copy operations for you: CLASSNAME::soap_dup(struct soap*) creates a deep copy of the object and allocates it in another soap context that you provide as an argument. Use NULL as this argument to allocate unmanaged deep copies (but these cannot have pointer cycles!). Then delete unmanaged copies with CLASSNAME::soap_del() for deep deletion of all members, and then delete the object itself.
See Memory management in C++ for more details. Use gSOAP 2.8.39 or greater.

Trying to use lambda functions as predicate for condition_variable wait method

I am trying to implement the producer-consumer pattern using C++11 concurrency. The wait method of the condition_variable class takes a predicate as its second argument, so I thought of using a lambda function:
struct LimitedBuffer {
    int* buffer, size, front, back, count;
    std::mutex lock;
    std::condition_variable not_full;
    std::condition_variable not_empty;

    LimitedBuffer(int size) : size(size), front(0), back(0), count(0) {
        buffer = new int[size];
    }
    ~LimitedBuffer() {
        delete[] buffer;
    }

    void add(int data) {
        std::unique_lock<std::mutex> l(lock);
        not_full.wait(l, [&count, &size]() {
            return count != size;
        });
        buffer[back] = data;
        back = (back+1)%size;
        ++count;
        not_empty.notify_one();
    }

    int extract() {
        std::unique_lock<std::mutex> l(lock);
        not_empty.wait(l, [&count]() {
            return count != 0;
        });
        int result = buffer[front];
        front = (front+1)%size;
        --count;
        not_full.notify_one();
        return result;
    }
};
But I am getting this error:
[Error] capture of non-variable 'LimitedBuffer::count'
I don't really know much about C++11 and lambda functions, but I found out that class members can't be captured by value. I am not capturing them by value, though, but by reference; yet it seems to be treated the same way.
In a display of brilliance, I stored the struct members' values in local variables and used those in the lambda function, and it worked! ...or not:
int ct = count, sz = size;
not_full.wait(l, [&ct, &sz]() {
    return ct != sz;
});
Obviously I was defeating the whole point of the wait function by using local variables, since their values are assigned once, while the interesting part is checking the member variables, which may, should, and will change. Silly me.
So, what are my choices? Is there any way I can make the wait method do what it has to do using the member variables? Or am I forced to avoid lambda functions and declare auxiliary functions to do the work?
I don't really get why I can't use member variables in lambda functions, but since the masters of the universe designed lambda functions for C++11 this way, there must be some good reason.
count is a member variable. Member variables cannot be captured directly. Instead, you can capture this to achieve the same effect:
not_full.wait(l, [this] { return count != size; });
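Applied to the struct from the question, both wait calls then look like this (a sketch reusing the same members shown above; nothing else needs to change):

void add(int data) {
    std::unique_lock<std::mutex> l(lock);
    // capture the object; count and size are reached through `this`
    not_full.wait(l, [this]() { return count != size; });
    buffer[back] = data;
    back = (back + 1) % size;
    ++count;
    not_empty.notify_one();
}

int extract() {
    std::unique_lock<std::mutex> l(lock);
    not_empty.wait(l, [this]() { return count != 0; });
    int result = buffer[front];
    front = (front + 1) % size;
    --count;
    not_full.notify_one();
    return result;
}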
