How to add labels to vector tiles in PostGIS

I'm creating vector tiles with PostGIS (ST_AsMVT()), rendering them with OpenLayers (ol/layer/VectorTile), and styling them with a style file (ol-mapbox-style).
I can add points, lines and polygons to tiles in PostGIS, and style and show them in OpenLayers. But I don't know how to add labels to the tiles in PostGIS:
names of lakes, road numbers, names that follow the road segments, and so on. Are there examples or tutorials available somewhere?
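In case it helps to make the setup concrete: the MVT format has no separate "label" geometry, so one common pattern is to export the label text (for example a name column) as a feature attribute from ST_AsMVT() and draw it client-side with a text style. Below is a minimal sketch under that assumption, with a hypothetical lakes table, a placeholder tile URL, and the plain OpenLayers style API; with ol-mapbox-style the equivalent would be a "symbol" layer whose "text-field" reads the same attribute.

```typescript
// Sketch: ship the label text as a feature attribute, style it client-side.
//
// On the PostGIS side, any non-geometry column of the subquery passed to
// ST_AsMVT() becomes a feature property, e.g. (simplified, hypothetical table):
//   SELECT ST_AsMVT(q, 'lakes') FROM (
//     SELECT name, ST_AsMVTGeom(geom, ST_TileEnvelope(z, x, y)) AS geom
//     FROM lakes WHERE geom && ST_TileEnvelope(z, x, y)
//   ) AS q;

import VectorTileLayer from 'ol/layer/VectorTile';
import VectorTileSource from 'ol/source/VectorTile';
import MVT from 'ol/format/MVT';
import { Style, Text, Fill, Stroke } from 'ol/style';

const labelledTiles = new VectorTileLayer({
  source: new VectorTileSource({
    format: new MVT(),
    url: '/tiles/{z}/{x}/{y}.pbf', // placeholder tile endpoint
  }),
  style: (feature) =>
    new Style({
      text: new Text({
        text: feature.get('name') ?? '', // the attribute exported by ST_AsMVT
        font: '12px sans-serif',
        fill: new Fill({ color: '#003' }),
        stroke: new Stroke({ color: '#fff', width: 2 }),
        // road names can follow the line geometry, lake names sit at a point
        placement:
          feature.getGeometry()?.getType() === 'LineString' ? 'line' : 'point',
      }),
    }),
});
```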

Related

Creating a Heatmap Over a 3D Model From Vector3 Point Data

I am attempting to render a flat, dynamically created heatmap on top of a 3D model that is loaded from an OBJ (or STL).
I am currently loading and rendering an OBJ with Three.js. I have Vector3 points that I am currently drawing as simple red cubes (image below). These data points have all been raycast onto my OBJ's mesh and lie on its surface. The Vector3 points are loaded from an external data source and will change depending on what data is being viewed/collected.
I would like to render my Vector3 point data as a heatmap on the surface of my OBJ. Here are some examples illustrating the type of visual effect I am trying to achieve:
I feel like vertex coloring is the way to achieve this, but my issue is that my OBJ model does not have enough tessellation for it. As you can see, many red dots fall on each face. I am struggling to find a way to paint over my object's mesh with colors exactly where my red point data is. I was assuming I would need to convert my random Vector3 points into a mesh, but cannot find a method to do so.
I've looked at the possibility of generating a texture, but 1) I do not have a UV map for my OBJs and do not see a way to generate one programmatically, and 2) I am a bit lost on how I would correlate the Vector3 point data to UV coordinates.
I've looked at using shaders, but my Vector3 point data appears to be too large to pass to a shader (it could be hundreds of thousands of points). I also feel it is not the right approach to render the heatmap every frame, and would rather render it only once on load.
I've looked into isosurfaces with point clouds and the marching cubes algorithm, but I don't think this is the right direction, since my data is only somewhat like a point cloud, and I am unsure how I would keep the result smooth along the surface of my OBJ mesh.
Although I would prefer to keep everything in JavaScript for viewing in the browser, I am open to doing server-side processing in any language/program with REST, so long as it can be automated without human intervention and the result pushed back to the browser for rendering.
Any suggestions or guidance is appreciated.
I'm only guessing, but it seems like you first need UV coordinates that map every triangle to a texture. Rather than doing this by hand, I'd suggest using a modeling package. Most modeling packages have some way of automatically and uniformly mapping every triangle to a texture; Blender, for example, can do this with its automatic unwrapping tools.
Next, put the heatmap into the texture by computing which triangles are affected by each dot (your raycasting), looking up their texture coordinates, projecting that dot into texture space, and then writing the colors into that part of the texture. I'm only guessing, but you probably can't consider just the exact points: heat that lands near the edge of a triangle needs to bleed over into the adjacent triangle, and that adjacent triangle might be using a completely different part of the texture.
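As a rough sketch of that bake-into-the-texture idea, assuming the OBJ already has UVs (for example after automatic unwrapping) and that the points are raycast with Three.js, whose intersections carry the interpolated uv of the hit. The function name, splat radius and cast direction are illustrative assumptions, and the seam-bleeding problem mentioned above is not handled.

```typescript
import * as THREE from 'three';

// Bake heat "splats" into a canvas texture at the UV coordinates of each
// raycast hit. Assumes the mesh geometry already has a UV attribute.
function bakeHeatmap(
  mesh: THREE.Mesh,
  points: THREE.Vector3[],
  down = new THREE.Vector3(0, -1, 0) // assumed cast direction
) {
  const size = 2048;
  const canvas = document.createElement('canvas');
  canvas.width = canvas.height = size;
  const ctx = canvas.getContext('2d')!;
  ctx.fillStyle = '#444';
  ctx.fillRect(0, 0, size, size);

  const raycaster = new THREE.Raycaster();
  for (const p of points) {
    // Cast from slightly above the point toward the surface.
    raycaster.set(p.clone().addScaledVector(down, -0.1), down);
    const hit = raycaster.intersectObject(mesh, false)[0];
    if (!hit || !hit.uv) continue;

    // Project the hit into texture space and splat a soft red dot there.
    const x = hit.uv.x * size;
    const y = (1 - hit.uv.y) * size; // canvas y runs opposite to v
    const g = ctx.createRadialGradient(x, y, 0, x, y, 16);
    g.addColorStop(0, 'rgba(255, 0, 0, 0.6)');
    g.addColorStop(1, 'rgba(255, 0, 0, 0)');
    ctx.fillStyle = g;
    ctx.fillRect(x - 16, y - 16, 32, 32);
  }

  const texture = new THREE.CanvasTexture(canvas);
  const material = mesh.material as THREE.MeshStandardMaterial;
  material.map = texture;
  material.needsUpdate = true;
}
```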

How to convert a picture into different views based on a test position using ray tracing

I want to train a path-loss model. I have a map image, and I want to convert this map into different views based on a test location (x, y).
I need a conversion algorithm to produce many different map views from the test location. I can show an example of this (sorry, it is hard to describe):
at the top left is the map with 4 columns; at the bottom right is the converted new map.
I want to use a light source (location A) to project light onto the buildings in the map; some of the light will be blocked, and we get shadows as seen from the test location.
The shadows derived from the AP location and the test location can then represent the environment information in this area.
If you have some idea to solve this, please let me know.
Thanks in advance
Cheng Hong
After discussing and googling, I found out that I should use ray-tracing techniques on a 2D map.
In my setup, I have two points on a map, location A and location P.
I now want to use ray tracing to combine the map and the two locations into a new map view.
In this new map view, location A is at the center, and shadows are added where the buildings (the black columns) in the original map block the light. This new map is then a kind of representation or descriptor of the map and the two location points. That is what I want to do.
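For concreteness, here is a minimal sketch of that shadow idea on a raster map. The grid representation, the function name and the simple step-based line march are illustrative assumptions, not from the post.

```typescript
// `map` is a 2D array where 1 marks a building cell; the result marks each
// free cell as lit (visible from light source A) or shadowed behind a building.
function computeShadowMap(map: number[][], ax: number, ay: number): boolean[][] {
  const h = map.length;
  const w = map[0].length;
  const lit = map.map((row) => row.map(() => false));

  for (let y = 0; y < h; y++) {
    for (let x = 0; x < w; x++) {
      if (map[y][x] === 1) continue; // buildings themselves are not lit
      // March from A toward (x, y); if a building cell is crossed first,
      // the target cell lies in its shadow.
      const dx = x - ax;
      const dy = y - ay;
      const steps = Math.max(Math.abs(dx), Math.abs(dy), 1);
      let blocked = false;
      for (let s = 1; s < steps; s++) {
        const cx = Math.round(ax + (dx * s) / steps);
        const cy = Math.round(ay + (dy * s) / steps);
        if (map[cy][cx] === 1) { blocked = true; break; }
      }
      lit[y][x] = !blocked;
    }
  }
  return lit;
}
```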
You need to add more specs, for example: is the map a raster image or a vector map? This has nothing to do with conversion (hence the retag); you just want to render your 2D map as a 3D scene, or as a 2D slice of it (a single horizontal line), and this can be done really easily.
raster map
Google Wolfenstein-style ray-casting rendering techniques, like:
Algorithm for 2D Raytracer
vector map
Construct a mesh from your map and render it with any 3D graphics API, such as OpenGL. To get started with this approach you need to grasp this:
Understanding 4x4 homogenous transform matrices
see also the sub-links in there ...
To implement the lighting conditions you can use any kind of shading; the easiest is normal shading. For more info see:
Normal shading: this may clarify a thing or two (for beginners)
Normal/Bump mapping: see the fragment shader and search for the dot product
Mirrored light: a slightly more complex lighting scheme
simple complete GL+VAO/VBO+GLSL+shaders example in C++
Curved Frosted Glass Shader?: for sub-surface scattering
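For the raster-map route, here is a minimal sketch of the Wolfenstein-style column casting the links above refer to. It uses fixed-step marching instead of a proper DDA, and all names and constants are placeholders.

```typescript
// One ray per screen column, marched through the grid until it hits a wall;
// the wall slice height is inversely proportional to the fisheye-corrected
// distance. A real renderer would use DDA and texture the slices.
function castColumns(
  map: number[][],          // 1 = wall cell, 0 = empty
  px: number, py: number,   // camera position in map cells
  angle: number,            // view direction in radians
  screenW: number,
  fov = Math.PI / 3
): number[] {
  const heights: number[] = [];
  for (let col = 0; col < screenW; col++) {
    const rayAngle = angle - fov / 2 + (col / screenW) * fov;
    const dx = Math.cos(rayAngle);
    const dy = Math.sin(rayAngle);
    let dist = 0;
    while (dist < 50) {
      dist += 0.02;
      const cx = Math.floor(px + dx * dist);
      const cy = Math.floor(py + dy * dist);
      if (!map[cy] || map[cy][cx] === undefined || map[cy][cx] === 1) break;
    }
    const corrected = dist * Math.cos(rayAngle - angle); // remove fisheye
    heights.push(Math.min(1, 1 / corrected));            // relative slice height
  }
  return heights;
}
```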

Adding text in three.js

I am visualizing a graph using Three.js and for each node of the graph I add a label using TextGeometry. It is a pretty small graph but when I add text my application gets really slow. What should I do about it?
TextGeometry is more suitable for cases where you are really interested in rendering the text in 3D. It creates complex geometry that will surely slow your app down, especially when there is a lot of text or you use CanvasRenderer.
For labels, it is generally better to use 2D labels, which are much faster to render. There are many different approaches. The labels can go on top of the Three.js rendering canvas, on a separate canvas, or even be normal HTML nodes positioned using CSS properties. Alternatively, you can dynamically create small canvases of your label texts and use them as sprite textures that always face the camera; this might be the easiest way, as the labels stay part of the 3D scene just like your other objects. For a separate-layer approach, you need to use unprojectVector or similar to figure out the screen XY coordinates that match your 3D scene positions.
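A minimal sketch of the canvas-to-sprite approach; the canvas size, font and scale factor are arbitrary placeholders.

```typescript
import * as THREE from 'three';

// Draw the label text onto a small canvas and use it as a sprite texture.
// The sprite always faces the camera and is far cheaper than TextGeometry.
function makeLabelSprite(text: string): THREE.Sprite {
  const canvas = document.createElement('canvas');
  canvas.width = 256;
  canvas.height = 64;
  const ctx = canvas.getContext('2d')!;
  ctx.font = '32px sans-serif';
  ctx.fillStyle = 'white';
  ctx.textAlign = 'center';
  ctx.textBaseline = 'middle';
  ctx.fillText(text, canvas.width / 2, canvas.height / 2);

  const texture = new THREE.CanvasTexture(canvas);
  const material = new THREE.SpriteMaterial({ map: texture, transparent: true });
  const sprite = new THREE.Sprite(material);
  sprite.scale.set(4, 1, 1); // keep the 256:64 canvas aspect ratio
  return sprite;
}

// Usage: one sprite per graph node, slightly above the node position.
// const label = makeLabelSprite(node.name);
// label.position.copy(node.position).add(new THREE.Vector3(0, 1.5, 0));
// scene.add(label);
```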
See these SO posts for example:
- Dynamically create 2D text in three.js
- Canvas and SpriteMaterial
- How do I add a tag/label to appear on top of several objects so that the tag always faces the camera when the user clicks the object?

Convert 2D planes to 3D model

We have multiple 2D planar images of an object scanned from a fan-beam perspective. An example is in Fig 5 below. We have multiple grainy dotted planes covering the whole object.
The issue with these images is that they cannot be directly mapped into 3D due to the fan-beam deformation.
Are there correction algorithms/methods that can be recommended so that these planes can be correctly mapped into 3D and the object can be reconstructed properly?
Depending on how you store your data, there might be various approaches. Guessing that you store the data as points ("grainy dotted planes"), you can interpolate between corresponding points in consecutive planes and thereby obtain a scan of the entire object. This does require the points to be in the same frame, so you might have to apply some kind of transformation that takes each plane's parameters into a global frame.
Another approach might be a least-squares fit of each plane, which can then be used to stitch the object together. You might also find helpful approaches in work on scanning 3D objects using 2D methods. Hope this helps.
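A hedged sketch of the "bring every plane into one global frame" step. It assumes each sample is stored as a (beam angle, range) pair in the fan geometry and that each scan plane's pose (fan apex position and in-plane axes) is known; neither assumption is stated in the question.

```typescript
type Pose = {
  origin: [number, number, number]; // fan apex position in the global frame
  xAxis: [number, number, number];  // in-plane axis along the central beam
  yAxis: [number, number, number];  // in-plane axis perpendicular to it
};

// Fan-beam sample -> Cartesian point in the plane's local 2D frame,
// then local 2D point -> global 3D point using the plane's pose.
function sampleToGlobal(
  beamAngle: number,
  range: number,
  pose: Pose
): [number, number, number] {
  const lx = range * Math.cos(beamAngle);
  const ly = range * Math.sin(beamAngle);
  return [
    pose.origin[0] + lx * pose.xAxis[0] + ly * pose.yAxis[0],
    pose.origin[1] + lx * pose.xAxis[1] + ly * pose.yAxis[1],
    pose.origin[2] + lx * pose.xAxis[2] + ly * pose.yAxis[2],
  ];
}

// Once every plane's samples are in the same global frame, corresponding
// points in consecutive planes can be interpolated to fill in the surface.
```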

In GLGE, is it possible to specify the face of a mesh that a texture should be mapped to? (WebGL as well)

I'm trying to make an environment map in the form of a cube that has images mapped onto particular faces, to give the illusion of being in the area (sort of like Google Street View).
I'm trying to do it in GLGE; however, with my limited experience, I only know how to map one texture to a whole mesh (which is what I'm doing at the moment). If I were to create six different textures, would it be possible for me to specify the faces that those textures should be applied to?
You could generate the six faces of the cube as separate objects and use a different texture for each. An alternative is to set different texture coordinates for the different faces of the cube.
If you want ready-to-run code, three.js has a couple of skybox examples. E.g. http://mrdoob.github.com/three.js/examples/webgl_panorama_equirectangular.html
You should look at "UV Mapping". Check this example. Roughly, UVs describe how the polygons are mapped (in x,y) on the texture.
Sounds like you want a cube map texture — it takes six separate images, and you lookup in it with a direction vector rather than (u,v) coordinates. They are the usual way to do environments. Cube map textures are available in WebGL.
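Since the answers point at three.js skyboxes and cube maps, here is a minimal three.js sketch of the cube-map route; the six file names are placeholders, and GLGE itself is not shown.

```typescript
import * as THREE from 'three';

// A cube map texture takes six images, one per cube face, and is looked up
// with a direction vector rather than (u, v) coordinates.
const cubeTexture = new THREE.CubeTextureLoader().load([
  'px.jpg', 'nx.jpg', // +x, -x
  'py.jpg', 'ny.jpg', // +y, -y
  'pz.jpg', 'nz.jpg', // +z, -z
]);

const scene = new THREE.Scene();
scene.background = cubeTexture; // drawn behind everything, like a skybox
```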
