I have the geodetic coordinates of the stations, and the data flowing into each station is generated randomly. Is there any graph library that can draw this?
For the map you can use d3-geo and the d3.geoAlbersUsa projection.
To draw the lines into the stations you can use an SVG path. You can generate the path with d3.geoPath.
I am doing a task where I use the Canny edge detector to compute an edge image in which white pixels represent the edges, and then I need the coordinates of these edge pixels in the image to send into another function.
The process of getting the coordinates of edge pixels from the edge image matrix is usually done with OpenCV's cv::findContours(), but the algorithm in that function is complicated, involves many discrete decisions, and is not differentiable. Now I want to use the process of turning an edge image into 2D coordinates as part of a deep learning model, so I need a differentiable and more straightforward process.
I couldn't find one; does anyone have any ideas? Thanks!
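One common workaround, sketched below in PyTorch (the question doesn't specify a framework, and soft_edge_coordinates is an illustrative name, not an existing API), is to avoid extracting a variable-length list of edge pixels at all: pair a fixed coordinate grid with the edge probabilities, so the only learned quantity is the per-pixel weight and gradients flow through it.

import torch

def soft_edge_coordinates(edge_prob):
    # edge_prob: (H, W) tensor in [0, 1] from a differentiable edge detector.
    # Returns an (H*W, 3) tensor of (row, col, weight) rows.  The coordinates
    # are a fixed grid; the weight column is the differentiable part, so
    # downstream layers treat it as a soft "this pixel is on an edge" mask
    # and gradients flow back into edge_prob.
    H, W = edge_prob.shape
    ys, xs = torch.meshgrid(
        torch.arange(H, dtype=edge_prob.dtype, device=edge_prob.device),
        torch.arange(W, dtype=edge_prob.dtype, device=edge_prob.device),
        indexing="ij",
    )
    coords = torch.stack([ys, xs], dim=-1).reshape(-1, 2)
    weights = edge_prob.reshape(-1, 1)
    return torch.cat([coords, weights], dim=1)

Whether this is enough depends on whether the downstream function can consume weighted points instead of a hard list of contour coordinates.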
In OSMnx some operations, e.g. graph simplification, only work once the graph is properly projected, while others work correctly in lat/long. I would like to do something like:
project graph
operate on the projected graph (e.g. simplify)
"unproject" result
operate on result in original coordinates
How do I do the unproject step?
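The "unproject" step is just another call to ox.project_graph, with the original CRS as the target. A minimal sketch (the place name and the in-between operations are placeholders):

import osmnx as ox

# build the graph in lat/long (unprojected) and remember its CRS
G = ox.graph_from_place("Piedmont, California, USA", network_type="drive")
orig_crs = G.graph["crs"]

# 1. project the graph (a suitable UTM zone is chosen automatically)
G_proj = ox.project_graph(G)

# 2. ... operate on G_proj here (simplification, consolidation, etc.) ...

# 3. "unproject": project again, but back to the original CRS
G_unproj = ox.project_graph(G_proj, to_crs=orig_crs)

# 4. operate on G_unproj in the original lat/long coordinates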
I want to know the basic idea behind creating 2D views of a 3D geometry in CAD programs like AutoCAD, SolidWorks, etc.
Here I list some basic ideas that I have come up with so far.
Which method do they use? Or is there a method I haven't listed?
idea A:
first, render every single face to a plane space.
then detect the boundaries of the faces.
do something magical that can recognize the 2D curves from the boundary pixels.
do something magical again to recognize which segments of the curves should be hidden.
construct a final view from the lines and curves generated in the steps above.
idea B:
they create projection rules for every type of surface with boundary wires, like plane, cylinder, sphere, spline, and those rules can be used at all projection angles.
then, apply the projection rules to every face, and finally they get a view made of many curves.
iterate over all curves generated in step 2, and check the visibility of each curve.
construct a final view.
idea C:
first, tessellate every face into many triangles.
then, find the boundaries of the triangles for every face.
then, we get many polylines from step 2.
iterate over all polylines generated for every face, and check the visibility of the polylines.
construct a final view.
I found a solution; it works like this:
tessellate every face and edge into triangles and segments.
project all those triangles and segments onto a plane.
then choose a suitable resolution and rasterize the projected triangles and segments into pixels, each carrying a height (depth) parameter.
find contours for every face and edge from those pixels.
set a visible flag for every pixel on each contour, depending on the height parameter of the overall pixel view.
reconstruct lines, circles, and polylines from the pixels.
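The visibility step is essentially a depth buffer built from the projected triangles, against which each projected edge pixel is compared. A simplified sketch of that idea (illustrative function names, not the full implementation):

import numpy as np

def rasterize_depth(triangles, res, extent):
    # triangles: (N, 3, 3) array of projected vertices, where (x, y) are
    # view-plane coordinates and z is the distance to the viewer
    # (smaller z = closer).  res is the pixel size, extent is
    # (xmin, ymin, xmax, ymax) of the view.
    xmin, ymin, xmax, ymax = extent
    W = int(np.ceil((xmax - xmin) / res))
    H = int(np.ceil((ymax - ymin) / res))
    depth = np.full((H, W), np.inf)
    for (x0, y0, z0), (x1, y1, z1), (x2, y2, z2) in triangles:
        # pixel bounding box of the triangle
        i0 = max(int((min(y0, y1, y2) - ymin) / res), 0)
        i1 = min(int((max(y0, y1, y2) - ymin) / res) + 1, H)
        j0 = max(int((min(x0, x1, x2) - xmin) / res), 0)
        j1 = min(int((max(x0, x1, x2) - xmin) / res) + 1, W)
        d = (y1 - y2) * (x0 - x2) + (x2 - x1) * (y0 - y2)
        if abs(d) < 1e-12:
            continue                      # degenerate (edge-on) triangle
        for i in range(i0, i1):
            for j in range(j0, j1):
                px = xmin + (j + 0.5) * res
                py = ymin + (i + 0.5) * res
                # barycentric coordinates of the pixel centre
                a = ((y1 - y2) * (px - x2) + (x2 - x1) * (py - y2)) / d
                b = ((y2 - y0) * (px - x2) + (x0 - x2) * (py - y2)) / d
                c = 1.0 - a - b
                if a < 0 or b < 0 or c < 0:
                    continue              # pixel centre outside the triangle
                z = a * z0 + b * z1 + c * z2
                depth[i, j] = min(depth[i, j], z)
    return depth

def edge_pixel_visible(x, y, z, depth, res, extent, tol=1e-3):
    # A projected edge pixel is visible if nothing in the depth buffer
    # is strictly in front of it at that pixel.
    xmin, ymin, _, _ = extent
    i = int((y - ymin) / res)
    j = int((x - xmin) / res)
    if i < 0 or j < 0 or i >= depth.shape[0] or j >= depth.shape[1]:
        return True
    return z <= depth[i, j] + tol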
I tested this method on some models, and it works well. Below is one of them:
I want to draw a network which can show the weight of each edge.
Currently, I am able to draw an unweighted graph using SNAP's PUNGraph:
G = snap.PUNGraph.New()
However, I wasn't able to find the class for the weighted graph.
I don't want to just put the value above the edge; I want to somehow resize the edge based on its weight.
Could someone tell me how I can draw a graph like the one below?
If possible I would like a solution using SNAP.
I ended up using NodeXL, which is capable of drawing a graph like the one below.
D3.js also supports a lot of good visualization features.
I am still looking for more fascinating tools.
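If you want to stay in Python, one option is networkx plus matplotlib rather than SNAP: pass the edge weights as line widths when drawing. A minimal sketch with a toy graph:

import networkx as nx
import matplotlib.pyplot as plt

# toy weighted graph
G = nx.Graph()
G.add_weighted_edges_from([(1, 2, 5.0), (2, 3, 1.0), (1, 3, 2.5), (3, 4, 4.0)])

pos = nx.spring_layout(G, seed=42)
weights = [G[u][v]["weight"] for u, v in G.edges()]

nx.draw_networkx_nodes(G, pos, node_color="lightsteelblue")
nx.draw_networkx_labels(G, pos)
nx.draw_networkx_edges(G, pos, width=weights)   # edge width = edge weight
plt.axis("off")
plt.show()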
Hi guys, I'm working on a web app for feature extraction from an IGES (file format) CAD model. I have extracted and stored all the entities from the IGES file, e.g. shell, face, loop, edge, and vertex entities. I also managed to draw the model using three.js.
I followed the algorithms in the document here until the point where we extract features from the model itself. The document assumes that the edge between two faces is a line, so calculating the angles and edge concavity becomes easy:
The algorithm used is as follows:
In my case, however, the edge between the faces is semi-cylindrical in shape, and this edge is detected by the app (I guess because of the entities in the IGES file) as having 4 faces (the outer and inner faces and the faces on the sides). When I draw the wireframe model, this is what is displayed:
I would like your help finding the best algorithm to compute the angle between two faces without using the edge face in the process, just like in the first image (i.e. as if the edge were just a line).
How do you calculate the edge direction (as stated in the document above)?
THANK YOU.
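For the angle itself, a minimal sketch with NumPy (it assumes your IGES reader can give an outward unit normal for each of the two main faces sampled near the fillet region, plus the edge's end points as listed in the first face's loop; the function names are illustrative):

import numpy as np

def edge_direction(p_start, p_end):
    # Edge direction as traversed in face 1's loop, normalized.
    d = np.asarray(p_end, float) - np.asarray(p_start, float)
    return d / np.linalg.norm(d)

def face_angle(n1, n2, edge_dir):
    # n1, n2:   outward unit normals of the two faces, sampled near the edge.
    # edge_dir: unit edge direction from edge_direction().
    # Returns (angle_in_radians, is_convex).  The convexity sign convention
    # depends on how your loops are oriented; flip the test if your winding
    # is the opposite of what is assumed here.
    n1 = np.asarray(n1, float)
    n2 = np.asarray(n2, float)
    cos_a = np.clip(np.dot(n1, n2), -1.0, 1.0)
    angle = np.arccos(cos_a)                        # angle between the normals
    is_convex = np.dot(np.cross(n1, n2), edge_dir) > 0
    return angle, is_convex

The idea is to ignore the fillet face entirely and treat the two main faces as if they met along a single line: the angle comes from the normals, and the sign of (n1 x n2) . edge_dir separates convex from concave edges.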