Plotting circles and arcs without segmentation (PLT format) - AutoCAD

I'm having problems plotting vector objects such as arcs and circles from AutoCAD without forcing them into a set of line segments. The PLT (HPGL) file format supports both circles and arcs defined through various means, but for some reason the output uses just lines.
The drawing I've created is really simple, just for testing purposes,
and here is the output from plotting (I've separated the commands onto single lines so it's more readable):
.(;.I81;;17:.N;19:IN;
SC;
PU;
RO90;
IP;
IW;
VS20,1VS20,2VS20,3VS20,4VS20,5VS20,6VS20,7VS20,8SP1;
PU;
PA0,0;
SP1;
LT;
PA-4985,2256;
PDPA-2985,256,-2985,-1744,-4985,256,-4985,2256;
PUPA-4967,-2572;
PDPA-4862,-2569,-4757,-2563,-4653,-2552,-4549,-2538,-4446,-2520,-4343,-2498,-4241,-2473,-4141,-2443,-4041,-2410,-3943,-2373,-3846,-2333,-3751,-2289,-3657,-2241,-3566,-2190,-3476,-2136,-3388,-2078,-3303,-2017,-3219,-1953,-3139,-1886,-3060,-1816,-2985,-1744;
PUPA-1530,-26;
PDPA-1472,-24,-1414,-18,-1357,-8,-1300,5,-1245,22,-1190,42,-1137,66,-1086,94,-1037,124,-989,158,-944,195,-902,235,-862,277,-825,322,-791,369,-761,419,-733,470,-709,523,-689,577,-672,633,-659,690,-649,747,-644,805,-642,863,-644,921,-649,979,-659,1036,-672,1093,-689,1148,-709,1203,-733,1256,-761,1307,-791,1356,-825,1404,-862,1449,-902,1491,-944,1531,-989,1568,-1037,1602,-1086,1632,-1137,1660,-1190,1684,-1245,1704,-1300,1721,-1357,1734,-1414,1744,-1472,1749,-1530,1751,-1588,1749,-1646,1744,-1703,1734,-1760,1721,-1816,1704,-1870,1684,-1923,1660,-1974,1632,-2024,1602,-2071,1568,-2116,1531,-2158,1491,-2198,1449,-2235,1404,-2269,1356,-2299,1307,-2327,1256,-2351,1203,-2371,1148,-2388,1093,-2401,1036,-2411,979,-2417,921,-2418,863,-2417,805,-2411,747,-2401,690,-2388,633,-2371,577,-2351,523,-2327,470,-2299,419,-2269,369,-2235,322,-2198,277,-2158,235,-2116,195,-2071,158,-2024,124,-1974,94,-1923,66,-1870,42,-1816,22,-1760,5,-1703,-8,-1646,-18,-1588,-24,-1530,-26;
PU;
PA0,0;
SP;
Now the command for a circle in HPGL is really simple - a PA move to the center followed by CI with the radius. Instead it was substituted with a bunch of line segments.
I've tried various HP printers, but this seems to make no difference as the driver support is always the same, so I settled on the HP 7585B. I've also tried increasing the quality, but this only resulted in more points.
Is there any way to get 1:1 (shape-for-shape) vector graphics from AutoCAD to PLT? Or is there any really simple file format like PLT that would support this?

Use DXF instead of PLT. You will get your 1:1 mapping.
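To see why DXF preserves the shapes, here is a minimal sketch using the ezdxf Python library (not part of the original answer; the geometry values are arbitrary examples): a circle is written as a single CIRCLE entity rather than a polyline approximation.
import ezdxf

doc = ezdxf.new()
msp = doc.modelspace()
msp.add_line((0, 0), (2000, 2000))               # arbitrary test geometry
msp.add_circle(center=(1000, 1000), radius=500)  # stays a true circle in the file
doc.saveas("test_circle.dxf")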

Creating a fence diagram in Mayavi or Matplotlib

Working with Matplotlib, I have produced some resistivity cross-sections of the soil, obtaining pictures like this:
Now I would like to display all those sections in 3D so as to better visualise the spatial distribution of resistivity in the field (i.e. a so-called fence diagram). I would also like to plot the 2D map of the site where those measurements were carried out at the base of my plot (say, on the XY plane).
As far as I have seen this is not feasible (or at least not convenient) with Matplotlib in 3D hence I decided to switch to Mayavi.
My questions are:
Is it feasible to import georeferenced rasters and then properly place them on the correct (vertical) planes (not necessarily parallel to the cartesian ones) with Mayavi? Does imshow() serve this purpose?
Is it better to recreate the contours in Mayavi at the proper locations? If this is the case, I did not find a function to create contours from unstructured data (the input images were created with tricontour/tricontourf in Matplotlib). I do not think interpolating over a structured grid in scipy would do, given the non-convex domain.
Ok, answering my own question:
mesh = mlab.triangular_mesh(x, y, z, triangles)   # vertices plus triangle connectivity
surf = mlab.pipeline.surface(mesh)
seems to do the job.
To be consistent with the previous work, the triangulation, duly masked, can be directly imported from Matplotlib.
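A hedged sketch of that route (the data here is random placeholder data standing in for one cross-section, not taken from the question):
import numpy as np
from matplotlib.tri import Triangulation
from mayavi import mlab

# Random scattered points standing in for one resistivity section
rng = np.random.default_rng(0)
x, y = rng.uniform(0, 100, 300), rng.uniform(0, 30, 300)
z = np.sin(x / 10.0) * np.cos(y / 5.0)

tri = Triangulation(x, y)             # the same Delaunay triangulation tricontourf would use
# tri.set_mask(...) can hide triangles that fall outside the non-convex survey area
mlab.triangular_mesh(x, y, z, tri.triangles, colormap="viridis")
mlab.show()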

geopandas rasterize shapefile

I am looking for the very simplest way to rasterise a shapefile in geopandas - the equivalent of arcpy's PolygonToRaster_conversion(), which does things in one line.
I have found some relatively involved methods, e.g.
https://snorfalorpagus.net/blog/2014/11/09/masking-rasterio-layers-with-vector-features/
Is it this complicated? Or is there a one-line option like arcpy's PolygonToRaster_conversion()?
I'm looking for the simplest starting point to get the idea.
I've been exploring rasterio to do this, but perhaps there are other ways.
I'm only just starting to use Geopandas and would appreciate any pointers.
Are you trying to rasterize a set of polygons with unique values in one step? If so, you want to rasterize using that unique value for each polygon, but beware that the last polygon rasterized to a given pixel will "claim" it (i.e., multiple polygons may touch a pixel, but the last one in your list of features will be the value rasterized there).
Or do you want to rasterize each polygon independently (or all polygons at the same time, as if they were a single polygon), so that you can extract out statistics from the raster? Mask may work for this, in a loop over each feature.
The closest you are likely to get to a one-line operation is using rasterio's rio mask or rio rasterize operation. The reason that the example you link to is more involved is that you need to do a few extra things to extract a subset of your original raster. There are now a few extra methods in rasterio that make that a bit easier (docs).
From geopandas, your geometry is in a GeoSeries. I haven't tested this directly, but you may need to call the __geo_interface__ of the series to get back GeoJSON-like shapes that rasterio expects as input.
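Putting those pieces together, a minimal sketch (untested, as noted above; the file name, attribute column, and pixel size are assumptions) of burning a GeoDataFrame into an array with rasterio.features.rasterize. Shapely geometries can be passed directly because they expose __geo_interface__:
import geopandas as gpd
from rasterio import features
from rasterio.transform import from_origin

gdf = gpd.read_file("polygons.shp")             # hypothetical input shapefile
minx, miny, maxx, maxy = gdf.total_bounds
res = 10.0                                      # assumed pixel size in map units
out_shape = (int((maxy - miny) / res), int((maxx - minx) / res))
transform = from_origin(minx, maxy, res, res)

# Burn an assumed attribute column into each polygon's pixels; where polygons
# overlap, the last one in the iterable wins (as noted above).
raster = features.rasterize(
    ((geom, value) for geom, value in zip(gdf.geometry, gdf["value"])),
    out_shape=out_shape, transform=transform, fill=0)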

how to add text labels to figs and merge them together

I have two figures, fig1.png and fig2.png, made with matplotlib in Python. For a paper, I want to merge these two figures together and give them (a) and (b) text labels in the top-left area, respectively. How can I do this with matplotlib? I have the code that generates fig1.png and fig2.png (fig1.py and fig2.py); is there another convenient way, like writing a short script to do it?
How easily you can combine two figures depends on how complex they are. If they are both simple figures with single axes it is relatively straightforward.
Since you have the code to generate the two axes, I would start by putting that code into a single file and then modifying your code to plot to subplot axes.
You can layout multiple axes within a figure as follows:
import matplotlib.pyplot as plt
fig, ax = plt.subplots(1,2) #nrows, ncols
The variable ax is now an array of two axes onto which you can plot.
So for example, to plot to the left hand side you can use:
ax[0].plot([1,2,3,4,5])
To the right:
ax[1].plot([1,2,3,4,5])
You can also set labels on the subplots using .set_title(). A complete example:
import matplotlib.pyplot as plt
fig, ax = plt.subplots(1, 2)
ax[0].plot([1, 2, 3, 4, 5])
ax[0].set_title("A")
ax[1].plot([5, 4, 3, 2, 1])
ax[1].set_title("B")
plt.show()
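For the (a)/(b) labels the question asks about, one option (a sketch, not part of the original answer) is to draw the text in axes coordinates so it sits in the top-left corner of each subplot regardless of the data limits:
import matplotlib.pyplot as plt

fig, ax = plt.subplots(1, 2)
ax[0].plot([1, 2, 3, 4, 5])
ax[1].plot([5, 4, 3, 2, 1])
for axis, label in zip(ax, ["(a)", "(b)"]):
    # axes coordinates: (0, 0) is the bottom-left and (1, 1) the top-right of each subplot
    axis.text(0.02, 0.95, label, transform=axis.transAxes, va="top", fontweight="bold")
fig.savefig("merged.png")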

how to improve keypoints detection and matching

I have been working on a personal project in image processing and robotics where, instead of the robot detecting colors and picking out the object as usual, it tries to detect the holes (resembling different polygons) on the board. For a better understanding of the setup, here is an image:
As you can see, I have to detect these holes, find out their shapes and then use the robot to fit the object into the holes. I am using a Kinect depth camera to get the depth image. The pic is shown below:
I was unsure how to detect the holes with the camera. Initially I used masking to remove the background portion and some of the foreground portion based on the depth measurement, but this did not work out: at different orientations of the camera the holes would merge with the board... something like inRange (it fully becomes white). Then I came across the adaptiveThreshold function:
adaptiveThreshold(depth1,depth3,255,ADAPTIVE_THRESH_GAUSSIAN_C,THRESH_BINARY,7,-1.0);
With noise removal using erode, dilate, and Gaussian blur, this detected the holes in a better manner, as shown in the picture below. Then I used the cvCanny edge detector to get the edges, but so far it has not been good, as shown in the picture below. After this I tried out various feature detectors (SIFT, SURF, ORB, GoodFeaturesToTrack) and found that ORB gave the best times and the best detected features. Then I tried to get the relative camera pose of a query image by finding its keypoints and matching them, passing the good matches to the findHomography function. The results are shown in the diagram below.
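For reference, the matching-to-homography step described above looks roughly like this in Python/OpenCV (a sketch only; the file names and the ratio-test threshold are placeholders, not values from the question):
import cv2
import numpy as np

img1 = cv2.imread("query.png", cv2.IMREAD_GRAYSCALE)   # placeholder file names
img2 = cv2.imread("train.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=1000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Hamming distance suits ORB's binary descriptors; keep matches passing Lowe's ratio test
matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
good = [m for m, n in matcher.knnMatch(des1, des2, k=2) if m.distance < 0.75 * n.distance]

src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)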
In the end I want to get the relative camera pose between the two images and move the robot to that position using the rotational and translational vectors obtained from the solvePnP function.
So is there any other method by which I could improve the quality of the detected holes for keypoint detection and matching?
I have also tried contour detection and approxPolyDP, but the approximated shapes are not really good:
I have tried tweaking the input parameters for the threshold and Canny functions, but this is the best I can get.
Also, is my approach to getting the camera pose correct?
UPDATE: No matter what I tried, I could not get good repeatable features to map. Then I read online that a depth image is low in resolution and is only used for things like masking and getting distances. So it hit me that the features are not proper because of the low-resolution image with its messy edges. So I thought of detecting features on an RGB image and using the depth image to get only the distances of those features. The quality of the features I got was literally off the charts; it even detected the screws on the board! Here are the keypoints detected using GoodFeaturesToTrack keypoint detection:
I met another hurdle while getting the distances, with the distances of the points not coming out properly. I searched for possible causes and it occurred to me after quite a while that there was an offset between the RGB and depth images because of the offset between the cameras. You can see this in the first two images. I then searched the net for how to compensate for this offset but could not find a working solution.
If any one of you could help me compensate for the offset, it would be great!
UPDATE: I could not make good use of the goodFeaturesToTrack function. The function gives the corners as Point2f. To compute descriptors we need KeyPoints, and converting Point2f to KeyPoint with the code snippet below leads to the loss of scale and rotational invariance:
for( size_t i = 0; i < corners1.size(); i++ )
{
    keypoints_1.push_back(KeyPoint(corners1[i], 1.f));
}
The hideous result from the feature matching is shown below.
I have to start on different feature matching approaches now. I'll post further updates. It would be really helpful if anyone could help in removing the offset problem.
Compensating the difference between image output and the world coordinates:
You should use the good old camera calibration approach for calibrating the camera response and possibly generating a correction matrix for the camera output (in order to convert it into real scales).
It's not that complicated once you have printed out a checkerboard template and captured various shots. (For this application you don't need to worry about rotation invariance. Just calibrate the world view with the image array.)
You can find more information here: http://www.vision.caltech.edu/bouguetj/calib_doc/htmls/own_calib.html
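For concreteness, a minimal OpenCV calibration sketch (the 9x6 inner-corner board size and the calib_*.png file pattern are assumptions, not from the answer):
import glob
import cv2
import numpy as np

pattern = (9, 6)                                   # assumed inner-corner count of the board
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)  # board coords, in squares

objpoints, imgpoints = [], []
for path in glob.glob("calib_*.png"):              # assumed file naming
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        objpoints.append(objp)
        imgpoints.append(corners)

# K is the camera matrix and dist the distortion coefficients used to undistort images
ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(objpoints, imgpoints, gray.shape[::-1], None, None)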
--
Now, since I can't seem to comment on the question, I'd like to ask if your specific application requires the machine to "find out" the shape of the hole on the fly. If there is a finite number of hole shapes, you may then model them mathematically and look for the pixels that support the predefined models in the B/W edge image.
For example, (x - x0)^2 + (y - y0)^2 - r^2 = 0 for a circle centered at (x0, y0) with radius r, where x and y are the pixel coordinates.
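One common way to realise this model fitting in OpenCV is a Hough transform; a minimal sketch (the image name and all parameter values are placeholders that need tuning):
import cv2
import numpy as np

img = cv2.imread("board_threshold.png", cv2.IMREAD_GRAYSCALE)   # placeholder image
circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, dp=1, minDist=30,
                           param1=100, param2=20, minRadius=10, maxRadius=60)
if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        print(f"candidate circle at ({x}, {y}) with radius {r}")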
That being said, I believe more clarification is needed regarding the requirements of the application (shape detection).
If you're going to detect specific shapes such as the ones in your provided image, then you're better off using a classifier. Delve into Haar classifiers, or better still, look into Bag of Words.
Using BoW, you'll need to train on a dataset consisting of positive and negative samples. Positive samples will contain N unique samples of each shape you want to detect. It's better if N is > 10, best if > 100, with samples that are highly variant and unique, for good, robust classifier training.
Negative samples would (obviously) contain stuff that does not represent your shapes in any way. They're just for checking the accuracy of the classifier.
Also, once you have your classifier trained, you could distribute your classifier data (say, suppose you use SVM).
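A rough Bag-of-Words training sketch (the file layout, vocabulary size, and labels file are assumptions; clustering ORB's binary descriptors with Euclidean k-means is a simplification of a proper BoW pipeline):
import glob
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

orb = cv2.ORB_create()
train_paths = sorted(glob.glob("train/*.png"))         # hypothetical training images
labels = np.loadtxt("train_labels.txt", dtype=int)     # hypothetical labels, one per image

per_image_desc = []
for path in train_paths:
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    _, desc = orb.detectAndCompute(img, None)
    per_image_desc.append(desc)

# Build the visual vocabulary by clustering all pooled descriptors
k = 50
vocab = KMeans(n_clusters=k, n_init=10).fit(np.vstack(per_image_desc).astype(np.float32))

def bow_histogram(desc):
    # Describe one image as a normalised histogram of visual words
    words = vocab.predict(desc.astype(np.float32))
    return np.bincount(words, minlength=k) / len(words)

X = np.array([bow_histogram(d) for d in per_image_desc])
clf = SVC(kernel="linear").fit(X, labels)               # the SVM mentioned above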
Here are some links to get you started with Bag of Words:
https://gilscvblog.wordpress.com/2013/08/23/bag-of-words-models-for-visual-categorization/
Sample code:
http://answers.opencv.org/question/43237/pyopencv_from-and-pyopencv_to-for-keypoint-class/

Camera calibration patterns

I would like to know if there is a process to generate camera calibration patterns.
We can use Paint or any other graphics tool and set precise measurements, but then we need to hard-code the point positions or create a txt/xml file.
Is there software that exports the data to a file that we can load into our own software?
What about 3D targets like boxes and/or cubes? Is there a method to generate the correct data points?
Cheers.
For 2D targets such as checkerboards, I used to do it the way user469049 describes, which was quite time-consuming. In the end I gave up and created a web tool that does all of the legwork:
https://calib.io/pages/camera-calibration-pattern-generator
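If you prefer to script the pattern yourself instead, a minimal matplotlib sketch (the square size and board dimensions are arbitrary examples, not from the answer) that writes a PDF intended to be printed at 100% scale:
import matplotlib.pyplot as plt
import numpy as np

cols, rows, square_mm = 9, 6, 25                      # arbitrary example dimensions
mm = 1 / 25.4                                         # matplotlib figure sizes are in inches
fig = plt.figure(figsize=(cols * square_mm * mm, rows * square_mm * mm))
ax = fig.add_axes([0, 0, 1, 1])                       # no margins, so squares keep their size
ax.set_axis_off()
board = np.indices((rows, cols)).sum(axis=0) % 2      # alternating 0/1 pattern
ax.imshow(board, cmap="gray", aspect="auto", interpolation="none")
fig.savefig("checkerboard.pdf")                       # print at true scale, no fitting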
I'm using inkscape:
http://dominoc925.blogspot.co.uk/2012/06/create-camera-calibration-chess-board.html
I usually create a PDF file for printing and also save the file as LaTeX with PSTricks extensions.
The tex file has paths, so for a square it has a \moveto command to set the starting point and \line commands to set the next points.
In the dominoc925 example they define black and white squares, but I just define the black squares to avoid repeated points.
I have a simple file loader in my code to get the points: just search for the \moveto and \line commands and work out the points from there.
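A rough sketch of such a loader (the exact PSTricks coordinate syntax in the exported .tex file is an assumption here, so the regular expression may need adjusting):
import re

# Matches e.g. \moveto(12.5,30.0) or \line(12.5,30.0); adjust to the actual export format
point_re = re.compile(r"\\(?:moveto|line(?:to)?)\s*\(\s*([-\d.]+)\s*,\s*([-\d.]+)\s*\)")

def load_pattern_points(tex_path):
    points = []
    with open(tex_path) as fh:
        for line in fh:
            for x, y in point_re.findall(line):
                points.append((float(x), float(y)))
    return points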
For the 3D targets I treat each pattern as one view because I don't have the tools to build a precise 3D target.
So instead of having different views of one pattern, like in the Matlab toolbox, I treat each detected pattern as a view.
In other words, if you have a 3D object, then the target on each face is treated as an independent view.
There is probably a more professional way to do the job but this is my process :)
I hope this helps.
