I am about to start working on a project that involves refactoring and modifying existing C++ code.
The project caches many objects that have parent-child relationships between them, and every object has a unique ID. See the following code:
class GetReleation
{
    template<typename T>
    T* GetParent(const int &iID);
    template<typename T>
    map<int, T*> GetChildren(const int &iID);
    .....
};
class ObjectClass
{
    int GetID();
    int GetOhterProperty();
    ......
};
The cached objects were never modified while the program was running, so no locking was needed, and the following code worked fine:
ObjectType* pObject = getReleation.GetParent<ObjectType>(iID);
pObject->GetOhterProperty();
But now objects may be added, modified, or deleted while the program is running, so a new module has been added to cache the objects and manage these changes.
It uses a read-write lock to protect against multithreaded access, and all objects are cached with a boost::shared_ptr.
For thread-safe use, the new module recommends calling its functions to read an object's properties directly; you should not obtain a pointer to the object and then use that pointer to read properties.
To read an object's properties, there are the following options:
1) Use the new module directly: ObjectVisitClass.GetProperty(iID);
However, so many classes use the member functions of GetReleation that switching them all to the new module would be a lot of work.
2) Keep GetReleation, but change the implementation of its member functions:
template<typename T>
T* GetParent(const int &iID)
{
    // get a boost::shared_ptr<T> from the new module
    boost::shared_ptr<T> spParent = ...;
    return spParent.get();
}
The caller of GetReleation gets a raw pointer to the object, and under multithreading this will core dump: the object may already have been deleted in the new module by the time the caller uses the pointer.
Alternatively, we can return a new copy of the object that the shared_ptr points to:
template<typename T>
T* GetParent(const int &iID)
{
    // get a boost::shared_ptr<T> from the new module
    boost::shared_ptr<T> spParent = ...;
    return new T(*spParent); // the caller owns this copy
}
This works fine, but it means a lot of copying. The copied object's properties may also go out of date, because the object can change in the new module after the caller receives the copy (which is not the same object the new module holds).
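A third option, shown here only as a sketch, is to change GetParent to return the boost::shared_ptr<T> itself, so the caller shares ownership: the object stays alive for as long as the caller uses it, with no copying and no dangling pointer. The NewModule::Instance().Get lookup below is a hypothetical stand-in for however the new module exposes its cache:

template<typename T>
boost::shared_ptr<T> GetParent(const int &iID)
{
    // Hypothetical lookup; the real module's API may differ.
    boost::shared_ptr<T> spParent = NewModule::Instance().Get<T>(iID);
    // Returning the shared_ptr shares ownership with the cache: even if the
    // module deletes its entry, the object lives until the caller releases it.
    return spParent;
}

The caller's snapshot can still be superseded by a newer object inside the module, but the pointer it holds never dangles.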
Please forgive my poor English. Is there a better way to implement this?
Any help will be appreciated. Thanks!
With reference to .Net Maui: How to read/write (get/set) a global object from any content page (MVVM)
I now need to pass around a rather large object (>500 MB): an OpenStreetMap extract, a RouterDb object (Itinero, http://docs.itinero.tech/index.html).
Though I am trying to use MVVM, some packages (Mapsui) use code-behind, so I am left with a mixture of ViewModel and code-behind. I can work around the lack of any kind of global object reference by using the MessagingCenter, which works remarkably well.
I am wondering whether this mechanism can handle passing large objects (megabytes) or whether internally it simply passes a reference to the object. I suspect from my experiments that it is trying to pass a copy of the object and failing, as I end up with a null object.
Here's the code
MessagingMarker.cs - just a placeholder class
namespace RouteIt.Models;
public class MessagingMarker
{
}
In my MainPageViewModel.cs
// Takes about 10 seconds to load
using (var stream = new FileInfo(@"C:\Users\Gordon\source\repos\RouteIt\Resources\Raw\gb.routerdb").OpenRead())
{
routerDb = RouterDb.Deserialize(stream);
}
//send message to codebehind containing object or reference?
MessagingCenter.Send(new MessagingMarker(), "RouterDbLoaded", routerDb);
and in my receiving MainPage.xaml.cs constructor
MessagingCenter.Subscribe<MessagingMarker, RouterDb>(this, "RouterDbLoaded", (sender, arg) =>
{
routerDb = arg;
});
router = new(routerDb);
Any thoughts? (And yes, perhaps I need to restructure my app, but at this stage I don't really want to; one day Mapsui might support MVVM, apparently.)
As always thanks for at least reading :)
This post is about exposing C++ objects to the V8 JavaScript engine. To attach a C++ object to a JavaScript object, I make use of the GetInternalField() and External APIs. Before you can set or get any internal field, you have to call SetInternalFieldCount() on the corresponding ObjectTemplate. Since I want to expose a constructor function to JS, I created a FunctionTemplate, set a C++ function that attaches the native object to the JS object on that template, and finally called SetInternalFieldCount() on the InstanceTemplate() of that function template. Too many words for the description; here is what I did:
struct Point {
int x, y;
static Local<FunctionTemplate> CreateClassTemplate(Isolate* isolate) {
Local<FunctionTemplate> constructor = FunctionTemplate::New(isolate, &ConstructorCallback);
constructor->InstanceTemplate()->SetInternalFieldCount(1); // I set internal field count here.
constructor->SetClassName(String::NewFromUtf8(isolate, "Point", NewStringType::kInternalized).ToLocalChecked());
auto prototype_t = constructor->PrototypeTemplate();
prototype_t->SetAccessor(String::NewFromUtf8(isolate, "x", NewStringType::kInternalized).ToLocalChecked(),
XGetterCallback);
return constructor;
}
// This callback is bound to the constructor to attach a C++ Point instance to the JS object.
static void ConstructorCallback(const FunctionCallbackInfo<Value>& args) {
auto isolate = args.GetIsolate();
Local<External> external = External::New(isolate, new Point);
args.Holder()->SetInternalField(0, external);
}
// This callback retrieves the C++ object and extracts its 'x' field.
static void XGetterCallback(Local<String> property, const PropertyCallbackInfo<Value>& info) {
auto external = Local<External>::Cast(info.Holder()->GetInternalField(0)); // This line triggers an out-of-bounds error.
auto point = reinterpret_cast<Point*>(external->Value());
info.GetReturnValue().Set(static_cast<double>(point->x));
}
};
// This function creates a context that installs the Point function template.
Local<Context> CreatePointContext(Isolate* isolate) {
auto global = ObjectTemplate::New(isolate);
auto point_ctor = Point::CreateClassTemplate(isolate);
global->Set(isolate, "Point", point_ctor);
return Context::New(isolate, nullptr, global);
}
When I tried to run the following JS code against the exposed C++ object, I got an 'Internal field out of bounds' error.
var p = new Point();
p.x;
I suspect that setting the internal field count on the instance template of a function template has nothing to do with the object created by the new expression. If so, what is the correct way to set the internal field count of the object created by new while exposing the constructor function to JavaScript? I want to achieve the following two things:
In JavaScript, a Point function is available so we can write var p = new Point;.
In C++, I can make sure the JS object has one internal field for our C++ Point to live in.
Edit: As @snek pointed out, I changed Holder() to This() and everything started to work. But later, when I changed SetAccessor to SetAccessorProperty, it worked even with Holder().
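For reference, the fix only touched the getter; everything else stayed the same:

// Fixed getter: read the internal field off This() instead of Holder().
static void XGetterCallback(Local<String> property, const PropertyCallbackInfo<Value>& info) {
auto external = Local<External>::Cast(info.This()->GetInternalField(0)); // no longer out of bounds
auto point = reinterpret_cast<Point*>(external->Value());
info.GetReturnValue().Set(static_cast<double>(point->x));
}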
Although the behaviour is very confusing, I think the major problem may not lie in the difference between Holder() and This(), but rather in SetAccessor versus SetAccessorProperty. Why? Because in many docs I have read, Holder() should be identical to This() in most cases, and I believe that without a Signature, and given that my test JS code is so simple (no magic property moving involved), This() should just be Holder() in my case.
Thus I decided to post another question about SetAccessor and SetAccessorProperty and leave this post as a reference.
As for why I am so sure that This() == Holder() should hold in my case, here are some old threads:
https://groups.google.com/forum/#!topic/v8-users/fK9PBWxJxtQ
https://groups.google.com/forum/#!topic/v8-users/Axf4hF_RfZo
And what do the docs say?
/**
* If the callback was created without a Signature, this is the same
* value as This(). If there is a signature, and the signature didn't match
* This() but one of its hidden prototypes, this will be the respective
* hidden prototype.
*
* Note that this is not the prototype of This() on which the accessor
* referencing this callback was found (which in V8 internally is often
* referred to as holder [sic]).
*/
V8_INLINE Local<Object> Holder() const;
Note that in my code there is literally no Signature, so This() and Holder() should make no difference; but with SetAccessor, they did.
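To make the comparison concrete, here is a sketch of the SetAccessorProperty variant mentioned in the edit above, based on my reading of the v8 headers (note that the getter becomes an ordinary FunctionCallback rather than an AccessorGetterCallback):

// Getter used with SetAccessorProperty; with this variant, Holder() worked too.
static void XGetter(const FunctionCallbackInfo<Value>& info) {
auto external = Local<External>::Cast(info.Holder()->GetInternalField(0));
auto point = reinterpret_cast<Point*>(external->Value());
info.GetReturnValue().Set(static_cast<double>(point->x));
}

// Registration on the prototype template, replacing the SetAccessor call:
prototype_t->SetAccessorProperty(
String::NewFromUtf8(isolate, "x", NewStringType::kInternalized).ToLocalChecked(),
FunctionTemplate::New(isolate, XGetter));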
I have a lot of native classes that accept some form of callback, usually a boost::signals2 slot object.
But for simplicity, let's assume the class:
#include <functional>

class Test
{
public:
    // set a callback that will be invoked at an unspecified time;
    // the callback will be removed when the Test instance dies
    void SetCallback(std::function<void(bool)> callback);
};
Now I have a managed class that wraps this native class, and I would like to pass a callback method to the native class.
public ref class TestWrapper
{
public:
TestWrapper()
: _native(new Test())
{
}
~TestWrapper()
{
delete _native;
}
private:
void CallbackMethod(bool value);
Test* _native;
};
Now, usually what I would do is the following:
Declare a method in the managed wrapper that is the callback I want.
Create a managed delegate object to this method.
Use GetFunctionPointerForDelegate to obtain a pointer to a function
Cast the pointer to the correct signature
Pass the pointer to the native class as callback.
I also keep the delegate alive since I fear it will be garbage collected and I will have a dangling function pointer (is this assumption correct?)
It looks something like this:
_managedDelegateMember = gcnew ManagedEventHandler(this, &TestWrapper::Callback);
System::IntPtr stubPointer = Marshal::GetFunctionPointerForDelegate(_managedDelegateMember);
UnmanagedEventHandlerFunctionPointer functionPointer = static_cast<UnmanagedEventHandlerFunctionPointer>(stubPointer.ToPointer());
_native->SetCallback(functionPointer);
I would like to reduce the amount of code and not have to perform any casts or declare any delegate types. I want to use a lambda expression with no delegate.
This is my new approach:
static void SetCallbackInternal(TestWrapper^ self)
{
gcroot<TestWrapper^> instance(self);
self->_native->SetCallback([instance](bool value)
{
// access managed class from within native code
instance->Value = value;
}
);
}
Declare a static method that accepts this, in order to be able to use a C++11 lambda.
Use gcroot to capture the managed class in the lambda and extend its lifetime for as long as the lambda is alive.
No casts, no additional delegate type or members, minimal extra allocation.
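For completeness, here is how the helper might be invoked; the constructor call site is my assumption, since the original registration point isn't shown:

TestWrapper::TestWrapper()
: _native(new Test())
{
// Hypothetical call site: register the lambda-based callback on construction.
SetCallbackInternal(this);
}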
Question:
Is this approach safe? I fear I'm missing something and that this could cause a memory leak or undefined behavior in some unanticipated scenario.
EDIT:
This approach leads to a MethodAccessException when the lambda calls a private method of its managed wrapper class. It seems this method must be at least internal.
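In other words, only the accessibility of the callback method changes (a sketch; the rest of the wrapper stays as above):

public ref class TestWrapper
{
public:
TestWrapper() : _native(new Test()) {}
~TestWrapper() { delete _native; }
internal:
// Was private; at least internal accessibility is required so the call
// from the native lambda does not throw a MethodAccessException.
void CallbackMethod(bool value);
private:
Test* _native;
};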
I think you should not be using gcroot but a shared pointer. Shared pointers are made to keep an object alive for as long as someone is using it.
You should also use a more C++ style throughout your code, replacing raw pointers with smart pointers and using a template instead of std::function (a lambda can be stored in a compile-time type).
For example, using the code you posted:
class Test
{
    // set a callback that will be invoked at an unspecified time;
    // it will be removed when the Test instance dies
    template <class T>
    void SetCallback(T callback); // Replaced std::function<void(bool)> with T
};
public ref class TestWrapper
{
public:
TestWrapper()
: _native(std::make_unique<Test>())
{}
private:
void CallbackMethod(bool value);
std::unique_ptr<Test> _native; // Replaced Test* with std::unique_ptr<Test>
};
After replacing the old method with this new method all over my code base, I can report that it is safe, more succinct, and as far as I can tell, no memory leaks occur.
Hence I highly recommend this method for passing managed callbacks to native code.
The only caveats I found were the following:
Using lambda expressions forces the use of a static method as a helper for the callback registration. This is kind of hacky; it is unclear to me why the C++/CLI compiler does not permit lambda expressions within regular member functions.
The method invoked by the lambda must be marked internal so that it does not throw a MethodAccessException upon invocation. This sort of makes sense, as it is not called from within the class scope itself; but still, delegates and lambdas in C# don't have that limitation.
We're using protobuf-net for sending log messages between services. When profiling a stress test under high concurrency, we see very high CPU usage, and TakeLock in RuntimeTypeModel is the culprit. The hot call stack looks something like:
*Our code...*
ProtoBuf.Serializer.SerializeWithLengthPrefix(class System.IO.Stream,!!0,valuetype ProtoBuf.PrefixStyle)
ProtoBuf.Serializer.SerializeWithLengthPrefix(class System.IO.Stream,!!0,valuetype ProtoBuf.PrefixStyle,int32)
ProtoBuf.Meta.TypeModel.SerializeWithLengthPrefix(class System.IO.Stream,object,class System.Type,valuetype ProtoBuf.PrefixStyle,int32)
ProtoBuf.Meta.TypeModel.SerializeWithLengthPrefix(class System.IO.Stream,object,class System.Type,valuetype ProtoBuf.PrefixStyle,int32,class ProtoBuf.SerializationContext)
ProtoBuf.ProtoWriter.WriteObject(object,int32,class ProtoBuf.ProtoWriter,valuetype ProtoBuf.PrefixStyle,int32)
ProtoBuf.BclHelpers.WriteNetObject(object,class ProtoBuf.ProtoWriter,int32,valuetype ProtoBuf.BclHelpers/NetObjectOptions)
ProtoBuf.Meta.TypeModel.GetKey(class System.Type&)
ProtoBuf.Meta.RuntimeTypeModel.GetKey(class System.Type,bool,bool)
ProtoBuf.Meta.RuntimeTypeModel.FindOrAddAuto(class System.Type,bool,bool,bool)
ProtoBuf.Meta.RuntimeTypeModel.TakeLock(int32&)
[clr.dll]
I see that we can use the new precompiler to get a speed boost, but I'm wondering if that will get rid of the issue (sounds like it doesn't use reflection); it would be a bit of work for me to integrate this, so I haven't tested it yet. I also see the option to call Serializer.PrepareSerializer. My initial (small scale) testing didn't make the prepare seem promising.
A little more info about the type we're serializing:
[ProtoContract]
public class SomeMessage
{
[ProtoMember(1)]
public SomeEnumType SomeEnum { get; set; }
[ProtoMember(2)]
public long SomeId { get; set; }
[ProtoMember(3)]
public string SomeString { get; set; }
[ProtoMember(4)]
public DateTime SomeDate { get; set; }
[ProtoMember(5, DynamicType = true, OverwriteList = true)]
public Collection<object> SomeArguments { get; set; }
}
Thanks for your help!
UPDATE 9/17
Thanks for your response! We're going to try the workaround you suggest and see if that fixes things.
This code lives in our logging system so, in the SomeMessage example, SomeString is really a format string (e.g. "Hello {0}") and the SomeArguments collection is a list of objects used to fill in the format string, just like String.Format. Before we serialize, we look at each argument and call DynamicSerializer.IsKnownType(argument.GetType()), if it isn't known, we convert it to a string first. I haven't looked at the ratios of data, but I'm pretty sure we have a lot of different strings coming in as arguments.
Let me know if this helps. If you need, I'll try to get more details.
TakeLock is only used when the model is being changed, for example because it is seeing a type for the first time. You shouldn't normally see TakeLock after the first time a particular type has been used. In most cases, using Serializer.PrepareSerializer<SomeMessage>() should perform all the necessary initialization (and similarly for any other contracts you are using).
However! I wonder if perhaps this is also related to your use of DynamicType; what are the actual objects being used here? It might be that I need to tweak the logic here, so that it doesn't spend any time on that step. If you let me know the actual objects (so I can repro), I will try to run some tests.
As for whether the precompiler would change this: yes, it would. A fully compiled static model has a completely different implementation of the ProtoBuf.Meta.TypeModel.GetKey method, so it would never call TakeLock (you don't need to protect a model that can never change!). But you can actually do something very similar without needing to use precompile. Consider the following, run as part of your app's initialization:
static readonly TypeModel serializer;
...
var model = TypeModel.Create();
model.Add(typeof(SomeMessage), true);
// TODO add other contracts you use here
serializer = model.Compile();
This will create a fully static-compiled serializer assembly in memory (instead of a mutable model with individual operations compiled). If you now use serializer.Serialize(...) instead of Serializer.Serialize (i.e. the instance method on your stored TypeModel rather than the static method on Serializer), then it will essentially be doing something very similar to the precompiler, but without the need to actually precompile it (obviously this will only be available on "full" .NET). This will then never call TakeLock, as it is running a fixed model rather than a flexible model. It does, however, require you to know which contract types you use. You could use reflection to find these, by looking for all the types with a given attribute:
static readonly TypeModel serializer;
...
var model = TypeModel.Create();
Type attributeType = typeof(ProtoContractAttribute);
foreach (var type in typeof(SomeMessage).Assembly.GetTypes()) {
if (Attribute.IsDefined(type, attributeType)) {
model.Add(type, true);
}
}
serializer = model.Compile();
But emphasis: the above is a workaround; it sounds like there's a glitch, which I'll happily investigate if I can see an example where it actually happens; most importantly: what are the objects in SomeArguments?
I'd like to change the representation of C# Doubles to rounded Int64 values with a four-decimal-place shift in the MongoDB C# driver's serialization stack. In other words, store (Double)29.99 as (Int64)299900.
I'd like this to be transparent to my app. I've had a look at custom serializers but I don't want to override everything and then switch on the Type with fallback to the default, as that's a bit messy.
I can see that RegisterSerializer() won't let me add one for an existing type, and that BsonDefaultSerializationProvider has a static list of primitive serializers and is marked internal with private members, so I can't easily subclass it.
I can also see that it's possible to RepresentAs Int64 for Doubles, but this is a cast, not a conversion. I essentially need both a cast AND a conversion, in both serialization directions.
I wish I could just give the default serializer a custom serializer to override one of its own, but that would be a dirty hack.
Am I missing a really easy way?
You can definitely do this; you just have to get the timing right. When the driver starts up, there are no serializers registered. When it needs a serializer, it looks it up in the dictionary where it keeps track of the serializers it knows about (i.e. the ones that have been registered). Only if it can't find one in the dictionary does it start figuring out where to get one (including calling the serialization providers), and if it finds one, it registers it.
The limitation in RegisterSerializer is there so that you can't replace an existing serializer that has already been used. But that doesn't mean you can't register your own if you do it early enough.
However, keep in mind that registering a serializer is a global operation, so if you register a custom serializer for double it will be used for all doubles, which could lead to unexpected results!
Anyway, you could write the custom serializer something like this:
public class CustomDoubleSerializer : BsonBaseSerializer
{
public override object Deserialize(BsonReader bsonReader, Type nominalType, Type actualType, IBsonSerializationOptions options)
{
var rep = bsonReader.ReadInt64();
return rep / 10000.0; // undo the four-decimal-place shift (299900 -> 29.99)
}
public override void Serialize(BsonWriter bsonWriter, Type nominalType, object value, IBsonSerializationOptions options)
{
var rep = (long)Math.Round((double)value * 10000); // four-decimal-place shift, rounded (29.99 -> 299900)
bsonWriter.WriteInt64(rep);
}
}
And register it like this:
BsonSerializer.RegisterSerializer(typeof(double), new CustomDoubleSerializer());
You could test it using the following class:
public class C
{
public int Id;
public double X;
}
and this code:
BsonSerializer.RegisterSerializer(typeof(double), new CustomDoubleSerializer());
var c = new C { Id = 1, X = 29.99 };
var json = c.ToJson();
Console.WriteLine(json);
var r = BsonSerializer.Deserialize<C>(json);
Console.WriteLine(r.X);
You can also use your own serialization provider to tell Mongo which serializer to use for certain types, which I ended up doing to mitigate some of the timing issues mentioned above when trying to override existing serializers. Here's an example of a serialization provider that overrides how decimals are serialized:
public class CustomSerializationProvider : IBsonSerializationProvider
{
public IBsonSerializer GetSerializer(Type type)
{
if (type == typeof(decimal)) return new DecimalSerializer(BsonType.Decimal128);
return null; // falls back to Mongo defaults
}
}
If you return null from your custom serialization provider, it will fall back to using Mongo's default serialization provider.
Once you've written your provider, you just need to register it:
BsonSerializer.RegisterSerializationProvider(new CustomSerializationProvider());
I looked through the latest iteration of the driver's code and checked if there's some sort of backdoor to set custom serializers. I am afraid there's none; you should open an issue in the project's bug tracker if you think this needs to be looked at for future iterations of the driver (https://jira.mongodb.org/).
Personally, I'd open a ticket; and if a quick workaround is necessary, I'd subclass DoubleSerializer, implement the new behavior, and then use reflection to inject it into either MongoDB.Bson.Serialization.Serializers.DoubleSerializer.__instance or MongoDB.Bson.Serialization.BsonDefaultSerializationProvider.__serializers.