I am having a tough time understanding the distinction between Envoy and Consul. What are the use cases for each one, and what are the advantages?
They both seem to provide service mesh, observability, and load balancing features.
Consul is a service mesh control plane which uses Envoy as its data plane proxy.
Data plane vs. control plane summary
Service mesh data plane: Touches every packet/request in the system. Responsible for service discovery, health checking, routing, load balancing, authentication/authorization, and observability.
Service mesh control plane: Provides policy and configuration for all of the running data planes in the mesh. Does not touch any packets/requests in the system. The control plane turns all of the data planes into a distributed system.
The above quote is from a blog post by the creator of Envoy, Matt Klein, entitled Service mesh data plane vs. control plane. I recommend reading the post in its entirety to better understand the role of a control plane & data plane within a service mesh.
I also recommend watching this video, Introduction to HashiCorp Consul Connect, for specifics of how Consul service mesh works.
I was trying to perform 3D reconstruction from the point cloud obtained from ARCore. However, the point cloud I was able to obtain using ARCore was not accurate or dense enough to perform 3D reconstruction. Specifically, points that were supposed to be on the same surface were off by a few millimetres, due to which the surface looked as if it had a thickness.
Am I isolating the point cloud correctly, or is it a limitation of ARCore?
What should be the standard approach to isolating the point cloud and to 3D reconstruction?
I am attaching below the point cloud I obtained of a laptop.
(The file is in the .PLY format. It can be viewed on https://sketchfab.com/, http://www.meshlab.net/, or any other software capable of rendering 3D models.)
The keyboard and the screen of the laptop here look as if they have a thickness, although all the points were supposed to be at the same depth.
Please have a look at the point cloud and guide me as to what is going wrong here, since I have been stuck at it for quite some time now.
Thank you
https://drive.google.com/file/d/18KMchFgYd8KOcyi8hPpB5yEfbnJ6DxmR/view?usp=sharing
Does Project Tango extract any visual features per frame (such as ORB or SIFT/SURF), or is the entire point cloud just 3D points extracted from the depth camera? If so, is it possible to know which algorithm they are using? Is it just corners?
I would like to dump the 3D point cloud along with the corresponding features, and I am wondering if this is possible in real time.
Unfortunately, they don't expose which features they use. All you get is XYZ + Confidence. Here's the realtime point cloud callback, from the C API:
TangoErrorType TangoService_connectOnPointCloudAvailable(
    void (*TangoService_onPointCloudAvailable)(void *context,
                                               const TangoPointCloud *cloud),
    ...
);
See:
https://developers.google.com/tango/apis/c/reference/group/depth
https://developers.google.com/tango/apis/java/reference/TangoPointCloudData
https://developers.google.com/tango/apis/unity/reference/class/tango/tango-unity-depth
TangoPointCloud is defined here:
https://developers.google.com/tango/apis/c/reference/struct/tango-point-cloud#struct_tango_point_cloud
https://developers.google.com/tango/apis/java/reference/TangoPointCloudData
https://developers.google.com/tango/apis/unity/reference/class/tango/tango-point-cloud-data
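If all you need is to dump XYZ + confidence in real time, a minimal sketch of such a callback might look like the following. This is C++ calling the C API; I'm assuming the tango_client_api.h header that ships with the Tango SDK, and the point layout {x, y, z, confidence} documented in the TangoPointCloud references above.

#include <cstdint>
#include <cstdio>
#include <tango_client_api.h>  // Tango C API header shipped with the SDK (assumed name)

// Called by the Tango service whenever a new point cloud is ready. Each entry
// in cloud->points is {x, y, z, confidence}; there are no per-point visual
// descriptors to dump.
static void onPointCloudAvailable(void* /*context*/, const TangoPointCloud* cloud) {
  for (uint32_t i = 0; i < cloud->num_points; ++i) {
    std::printf("%f %f %f %f\n",
                cloud->points[i][0],   // x
                cloud->points[i][1],   // y
                cloud->points[i][2],   // z
                cloud->points[i][3]);  // confidence
  }
}

// After TangoService_connect(), register the callback (see the reference above
// for the exact connect signature):
//   TangoService_connectOnPointCloudAvailable(onPointCloudAvailable);

Note that doing file I/O inside the callback will stall the pipeline; in practice you would copy the buffer and hand it to a worker thread if you want this to stay real-time.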
As an aside, if you regard Tango's objective as being a portable API that sits atop various different sensors and hardware platforms, then it makes sense that they wouldn't expose the details of the underlying depth estimation method. It might change, from one device to the next.
BTW, they also keep the internals of their ADF (Area Description File) format secret.
I have a point cloud of an object, obtained with a laser scanner, and a CAD surface model of that object.
How can I match the point cloud to the surface, to obtain the translation and rotation between cloud and model?
I suppose I could sample the surface and try the Iterative Closest Point (ICP) algorithm to match the resulting sampled point cloud to the scanner point cloud.
Would that actually work?
And are there better algorithms for this task?
In the new OpenCV, I have implemented a surface matching module to match a 3D model to a 3D scene. No initial pose is required and the detection process is fully automatic. The module also includes an ICP refinement.
To get an idea, please check out a video here (though it was not generated by the OpenCV implementation):
https://www.youtube.com/watch?v=uFnqLFznuZU
The full source code is here and the documentation is here.
You mentioned that you needed to sample your CAD model. This is correct, and we have proposed a sampling algorithm suited for point pair feature matching, such as the one implemented in OpenCV:
Birdal, Tolga, and Slobodan Ilic. "A point sampling algorithm for 3D matching of irregular geometries." 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2017.
http://campar.in.tum.de/pub/tbirdal2017iros/tbirdal2017iros.pdf
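For reference, driving the module typically looks something like the sketch below (C++). The file names and numeric parameters are placeholders you would tune for your own model and scan resolution; both clouds need per-point normals.

#include <opencv2/surface_matching.hpp>
#include <opencv2/surface_matching/ppf_helpers.hpp>  // loadPLYSimple
#include <vector>

using namespace cv;
using namespace ppf_match_3d;

int main() {
  // Both clouds are Nx6 matrices (x, y, z, nx, ny, nz); normals are required.
  Mat model = loadPLYSimple("cad_model_sampled.ply", 1);  // sampled CAD surface (placeholder)
  Mat scene = loadPLYSimple("laser_scan.ply", 1);         // scanner point cloud (placeholder)

  // Train point pair features on the model (steps are relative to its diameter).
  PPF3DDetector detector(0.025, 0.05);
  detector.trainModel(model);

  // Match against the scene; candidate poses come back sorted by vote count.
  std::vector<Pose3DPtr> poses;
  detector.match(scene, poses, 1.0 / 40.0, 0.05);

  // Refine the hypotheses with the module's ICP.
  ICP icp(100, 0.005f, 2.5f, 8);
  icp.registerModelToScene(model, scene, poses);

  if (!poses.empty())
    poses[0]->printPose();  // best rotation + translation of the model in the scene
  return 0;
}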
Yes, ICP can be applied to this problem, as you suggest, by sampling the surface. It would be best if you have all faces visible in your laser scan; otherwise you may have to remove invisible faces from your model (depending on how many of these there are).
One way of automatically preparing the model by getting rid of some of the hidden faces is to compute its concave hull, which can then be used to discard hidden faces (for example, faces that are not close to the concave hull). Depending on how involved the model is, this may or may not be necessary.
ICP works well given a good initial guess, because it ignores points that are not close with respect to the current guess. If ICP is not coming up with a good alignment, you can try it with multiple random restarts and keep the best alignment.
A more involved solution is to do local feature matching. You sample the clouds and compute an invariant descriptor like SHOT or FPFH, find the best matches, reject inconsistent ones, use the remaining matches to come up with a good initial alignment, and then refine it with ICP. But you may not need this step, depending on how robust and fast the random-restart ICP is.
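As a rough sketch of that pipeline, here is one way to do it with the Point Cloud Library (PCL); the answer above doesn't prescribe a library, and every radius and threshold below is a placeholder that depends on your scan resolution and units.

#include <Eigen/Core>
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/features/normal_3d.h>
#include <pcl/features/fpfh.h>
#include <pcl/registration/sample_consensus_prerejective.h>
#include <pcl/registration/icp.h>

using Cloud = pcl::PointCloud<pcl::PointXYZ>;
using Features = pcl::PointCloud<pcl::FPFHSignature33>;

// Estimate normals and FPFH descriptors for one cloud (radius in scene units).
static Features::Ptr describe(const Cloud::Ptr& cloud, float radius) {
  pcl::PointCloud<pcl::Normal>::Ptr normals(new pcl::PointCloud<pcl::Normal>);
  pcl::NormalEstimation<pcl::PointXYZ, pcl::Normal> ne;
  ne.setInputCloud(cloud);
  ne.setRadiusSearch(radius);
  ne.compute(*normals);

  Features::Ptr features(new Features);
  pcl::FPFHEstimation<pcl::PointXYZ, pcl::Normal, pcl::FPFHSignature33> fpfh;
  fpfh.setInputCloud(cloud);
  fpfh.setInputNormals(normals);
  fpfh.setRadiusSearch(radius * 2.0f);
  fpfh.compute(*features);
  return features;
}

// Returns the scan-to-model transform: coarse feature-based alignment,
// refined with ICP.
Eigen::Matrix4f alignScanToModel(const Cloud::Ptr& scan, const Cloud::Ptr& model) {
  const float radius = 0.01f;  // placeholder: roughly a few point spacings

  // Coarse alignment from FPFH correspondences with geometric prerejection.
  pcl::SampleConsensusPrerejective<pcl::PointXYZ, pcl::PointXYZ,
                                   pcl::FPFHSignature33> coarse;
  coarse.setInputSource(scan);
  coarse.setSourceFeatures(describe(scan, radius));
  coarse.setInputTarget(model);
  coarse.setTargetFeatures(describe(model, radius));
  coarse.setMaximumIterations(20000);
  coarse.setNumberOfSamples(3);
  coarse.setSimilarityThreshold(0.9f);
  coarse.setMaxCorrespondenceDistance(radius * 2.5f);
  coarse.setInlierFraction(0.25f);
  Cloud::Ptr coarselyAligned(new Cloud);
  coarse.align(*coarselyAligned);

  // Refine with plain ICP, which now starts from a good initial guess.
  pcl::IterativeClosestPoint<pcl::PointXYZ, pcl::PointXYZ> icp;
  icp.setInputSource(coarselyAligned);
  icp.setInputTarget(model);
  icp.setMaxCorrespondenceDistance(radius * 2.5f);
  Cloud::Ptr refined(new Cloud);
  icp.align(*refined);

  return icp.getFinalTransformation() * coarse.getFinalTransformation();
}

You would call this with clouds loaded via pcl::io::loadPLYFile, after downsampling both to a comparable point density.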
There's an open source library for point cloud algorithms which implements registration against other point clouds. Maybe you can try some of their methods to see if any fit.
As a starter, if they don't have anything specific to fit against a polygon mesh, you can treat the mesh vertices as another point cloud and fit your point cloud against it. This is something that they definitely support.
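Continuing with PCL as an example (it is one such open-source option, though the answer above doesn't name it), treating the mesh vertices as a cloud is just a conversion away. File names are placeholders, and plain ICP is shown; in practice you would seed it with a coarse alignment like the one sketched earlier.

#include <Eigen/Core>
#include <pcl/point_types.h>
#include <pcl/PolygonMesh.h>
#include <pcl/conversions.h>     // pcl::fromPCLPointCloud2
#include <pcl/io/ply_io.h>
#include <pcl/registration/icp.h>

int main() {
  using Cloud = pcl::PointCloud<pcl::PointXYZ>;

  // Load the CAD model as a polygon mesh and reinterpret its vertices as a
  // point cloud (file names are placeholders).
  pcl::PolygonMesh mesh;
  pcl::io::loadPLYFile("cad_model.ply", mesh);
  Cloud::Ptr modelVertices(new Cloud);
  pcl::fromPCLPointCloud2(mesh.cloud, *modelVertices);

  // Load the laser scan.
  Cloud::Ptr scan(new Cloud);
  pcl::io::loadPLYFile("laser_scan.ply", *scan);

  // Register the scan against the vertex cloud.
  pcl::IterativeClosestPoint<pcl::PointXYZ, pcl::PointXYZ> icp;
  icp.setInputSource(scan);
  icp.setInputTarget(modelVertices);
  Cloud::Ptr aligned(new Cloud);
  icp.align(*aligned);
  Eigen::Matrix4f scanToModel = icp.getFinalTransformation();
  (void)scanToModel;  // translation + rotation between cloud and model
  return 0;
}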
I am using the Three.js Raycaster method in my web-based car race game, but due to the heavy computation it is consuming a lot of CPU cycles, leading to a drop in FPS. I am thinking of offloading the Raycaster method of Three.js to a WebWorker. Can anyone guide me on how to accomplish this, or is it possible at all?
There was a question about offloading the merge-geometry method to a web worker; it should help you get your head around the problem.
While the merge geometry method is not ideal for a web worker, other things like physics and perhaps your ray cast method are.
Merging geometries using a WebWorker?
The key point is that whatever work you do in the web worker will have to be sent back to the main thread as an array of floats.
So you will need to pack your data up and unpack it on the other end.
This works well for physics engines when they are responding with x,y,z coordinates and the entire system is simulated in the web worker and positions are passed back to the main thread for rendering.
Has anybody used the OctreeSceneManager in Ogre? Is it faster for rendering than the Generic Scene Manager? I use the OpenGL rendering system.
(Better late than never ;-) )
Quoting the Ogre Wiki, an OSM (Octree Scene Manager)
Uses an octree to split the scene and performs well for most scenes, except those which are reliant on heavy occlusion.
Now there's one big difference between a GSM (Generic Scene Manager) and an OSM:
The GSM relies on you, the developer, to build a scene node hierarchy while the OSM builds an octree of all scene nodes automatically in the background.
This means that if you are going to add all your scene nodes to the root node, the GSM will be much slower [1].
The usage of both remains the same, you're still free to order your nodes as you want.
When you do a lot of frustum culling or (ray) scene queries you may want to use the OSM, as it will perform better [2] (but just remember not to bunch all your scene nodes together under the root node).
Unless you have a scene which is reliant on heavy occlusion, or you have to add all your scene nodes directly under the root node, I'd prefer the OSM.
[1] GSM vs. OSM
[2] How to use OSM
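For reference, here is a minimal sketch of what switching looks like with the Ogre 1.x API; as noted above, the usage stays the same, only the creation call (and the loaded plugin) changes. The mesh and node names are made up for illustration.

#include <Ogre.h>

void setupScene(Ogre::Root* root) {
  // Requires Plugin_OctreeSceneManager to be listed in plugins.cfg
  // (or loaded with root->loadPlugin("Plugin_OctreeSceneManager")).
  Ogre::SceneManager* sm = root->createSceneManager("OctreeSceneManager");
  // Ogre::SceneManager* sm = root->createSceneManager(Ogre::ST_GENERIC);  // GSM instead

  // Keep a real hierarchy instead of parenting everything to the root node.
  Ogre::SceneNode* carNode =
      sm->getRootSceneNode()->createChildSceneNode("Car");
  Ogre::SceneNode* wheelNode = carNode->createChildSceneNode("FrontLeftWheel");
  wheelNode->attachObject(sm->createEntity("wheel.mesh"));  // placeholder mesh

  // Ray scene queries are where the octree pays off.
  Ogre::Ray ray(Ogre::Vector3(0, 100, 0), Ogre::Vector3::NEGATIVE_UNIT_Y);
  Ogre::RaySceneQuery* query = sm->createRayQuery(ray);
  Ogre::RaySceneQueryResult& hits = query->execute();
  (void)hits;  // inspect the hits here
  sm->destroyQuery(query);
}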