I'm trying to create a spline curve programmatically in a dxf file. I need to use fit points as the curve needs to pass through the specified points. I understand I also need to use control points. Is there a formula to calculate what these should be? It is a closed spline with four fit points.
Thanks in advance!
I think this is not an easy task. In addition to the control points, you will also need to determine the knots. There is a DXF reader/viewer here (written in C++) which claims to support splines. Maybe you can find some information by reading its code.
AutoCAD uses NURBS, which are approximating curves (the curve passes only through the first and last control points). In the user interface, splines are interpolated (the curve passes through the fit points), so a translation is done when reading and writing a DXF file. If you create a closed spline with 4 fit points, you will see that there are 7 control points in the DXF file.
Using a polyline to approximate your spline will be easier. Here is a sample of a polyline (an L shape going from 0,0 -> 100,0 -> 100,50):
0
LWPOLYLINE
5
D5
330
70
100
AcDbEntity
8
0
100
AcDbPolyline
90
3
70
0
43
0.0
10
0.0
20
0.0
10
100.0
20
0.0
10
100.0
20
50.0
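If you would rather generate that polyline programmatically than hand-write the group codes, a minimal sketch using the ezdxf Python package (my choice of library, not something the DXF format requires) could look like this:

import ezdxf  # assumption: the ezdxf package is available; any DXF writer would do

doc = ezdxf.new("R2010")
msp = doc.modelspace()

# The same L shape as the LWPOLYLINE above: 0,0 -> 100,0 -> 100,50.
msp.add_lwpolyline([(0, 0), (100, 0), (100, 50)])

doc.saveas("l_shape.dxf")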
To compute the position of the control points from the fit points, you can consult this page (§24 & §25). In essence you need to invert de Casteljau's algorithm (that is for Bézier curves; I don't know how it works for NURBS).
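If you want to compute them yourself, here is a rough sketch for the closed case: for a closed uniform cubic B-spline, the curve passes through (C[i-1] + 4*C[i] + C[i+1]) / 6 at each knot, so the control points can be found by solving a small cyclic system. This assumes a uniform parameterization; AutoCAD most likely uses a chord-length parameterization and a clamped knot vector, so its control points will not match exactly.

import numpy as np

# Fit points of a closed curve (hypothetical example: a square).
fit = np.array([[0.0, 0.0], [10.0, 0.0], [10.0, 10.0], [0.0, 10.0]])
n = len(fit)

# Cyclic tridiagonal system A @ C = 6 * P from the uniform cubic B-spline relation.
A = np.zeros((n, n))
for i in range(n):
    A[i, (i - 1) % n] = 1.0
    A[i, i] = 4.0
    A[i, (i + 1) % n] = 1.0

ctrl = np.linalg.solve(A, 6.0 * fit)
print(ctrl)  # control points that make the closed spline pass through the fit points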
While I appreciate this is an old question, I thought I'd share my experience. I have found that you can write a spline to a DXF file using only fit points and no control points. I've only done this with open splines, and it might (or probably does) vary with the DXF version.
SECTION
2
ENTITIES
0
SPLINE
8
Outline
100
AcDbSpline
70
1032
71
3
72
0
73
0
74
6
44
0.000000001
11
33.98654201387437
21
0.0
31
0.0
11
35.68732510673189
21
0.36908328878159574
31
0.0
11
37.37659045005916
21
1.0707740721032477
31
0.0
11
39.04265824154412
21
2.0149195037916585
31
0.0
11
40.67371568762629
21
3.1732042281057
31
0.0
11
42.25786591112497
21
4.5302062466715505
31
0.0
Group code 70 bit value 1024 allows for fitting to points. I found this little nugget of information in an AutoCAD forum post; I haven't come across it referenced anywhere else. A bit value of 1 is a closed spline, and 8 is planar. My value of 1032 (1024 + 8) is therefore planar, fit to points, and not closed.
Group code 74 is the number of fit points.
Group code 44 is the fit point tolerance.
Group codes 11, 21, 31 are the x, y, z coordinates of the fit points.
See reference manual.
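For what it's worth, the ezdxf Python package (my choice, not mentioned in the DXF reference) appears to let you write exactly this kind of fit-points-only spline without building the group codes by hand; a minimal sketch:

import ezdxf  # assumption: ezdxf is installed

doc = ezdxf.new("R2010")
msp = doc.modelspace()

# An open spline defined only by fit points (values rounded from the sample above).
fit_points = [(33.99, 0.0, 0.0), (35.69, 0.37, 0.0), (37.38, 1.07, 0.0),
              (39.04, 2.01, 0.0), (40.67, 3.17, 0.0), (42.26, 4.53, 0.0)]
msp.add_spline(fit_points=fit_points, dxfattribs={"layer": "Outline"})

doc.saveas("spline_fit_points.dxf")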
I currently display 115 (!) different sponsor icons at the bottom of many web pages on my website. They're lazy-loaded, but even so, that's quite a lot.
At present, these icons are loaded separately, and are sized 75x50 (or x2 or x3, depending on the screen of the device).
I'm toying with the idea of making them all into one sprite, rather than 115 separate files. That would mean, instead of lots of tiny little files, I'd have one large PNG or WEBP file instead. The way I'm considering doing it would mean the smallest file would be 8,625 pixels across; and the x3 version would be 25,875 pixels across, which seems like a really very large image (albeit only 225 px high).
Will an image of this pixel size cause a browser to choke?
Is a sprite the right way to achieve a faster-loading page here, or is there something else I should be considering?
115 icons at 75 pixels wide does indeed work out to a very wide image: 8,625 pixels across and only 50 pixels high...
but you don't have to use a low (50 pixel), very wide (8,625 pixel) strip.
You can make a sensibly sized rectangular image with the icons laid out in a grid, say 12 rows of 10 icons per row:
10 x 75 = 750 pixels, plus 9 gaps of 5 pixels between icons, is about 795 pixels wide;
12 x 50 = 600 pixels, plus 11 gaps of 5 pixels between rows, is about 655 pixels tall.
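If it helps, here is a minimal Pillow sketch that assembles such a grid sprite (the folder name, tile size, gap and column count are assumptions for illustration):

from PIL import Image
import glob

ICON_W, ICON_H, GAP, COLS = 75, 50, 5, 10      # hypothetical layout
icons = sorted(glob.glob("icons/*.png"))        # hypothetical source folder
rows = -(-len(icons) // COLS)                   # ceiling division: 115 icons -> 12 rows

sheet = Image.new("RGBA",
                  (COLS * (ICON_W + GAP) - GAP, rows * (ICON_H + GAP) - GAP),
                  (0, 0, 0, 0))
for i, path in enumerate(icons):
    x = (i % COLS) * (ICON_W + GAP)
    y = (i // COLS) * (ICON_H + GAP)
    sheet.paste(Image.open(path).resize((ICON_W, ICON_H)), (x, y))

sheet.save("sprite.png")                        # roughly 795 x 655 px for 115 icons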
I have read the original Viola-Jones article, the Wikipedia article, the OpenCV manual and these SO answers:
How does the Viola-Jones face detection method work?
Defining an (initial) set of Haar Like Features
I am trying to implement my own version of the two detectors in the original article (the AdaBoost 200-feature version and the final cascade version), but something is missing in my understanding, specifically how the 24 x 24 detector works on the entire image and (maybe) on its sub-images at (maybe) different scales. To my understanding, during detection:
(1) The integral image is computed twice, for image variance normalization: once as is, once on the squared pixel values:
The variance of an image sub-window can be computed quickly using a
pair of integral images.
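As a sanity check of how I read step (1), here is a quick numpy sketch (my own code, not from the paper) computing a sub-window's variance from the pair of integral images:

import numpy as np

def integral(img):
    # Zero-padded integral image: integral(img)[y, x] = sum of img[:y, :x].
    return np.pad(np.cumsum(np.cumsum(img, axis=0), axis=1), ((1, 0), (1, 0)))

def window_variance(img, x, y, w=24, h=24):
    f = img.astype(np.float64)
    ii, ii2 = integral(f), integral(f * f)   # the "pair of integral images"
    # In a real scanner ii and ii2 would of course be computed once per image.
    s  = ii[y + h, x + w]  - ii[y, x + w]  - ii[y + h, x]  + ii[y, x]
    s2 = ii2[y + h, x + w] - ii2[y, x + w] - ii2[y + h, x] + ii2[y, x]
    n = w * h
    return s2 / n - (s / n) ** 2             # E[x^2] - E[x]^2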
(2) The 24 x 24 square detector is moved across the normalized image in steps of 1 pixel, deciding for each square whether it is a face (or a different object) or not:
The final detector is scanned across the image at multiple scales and
locations.
(3) Then the image is scaled down by a factor of 1.25, and we go back to (1).
This is done 12 times, until the smaller side of the image is about 24 pixels long (288 in the original image, divided by 1.25^(12 - 1), is just over 24):
a 384 by 288 pixel image is scanned at 12 scales each a factor of 1.25
larger than the last.
But then I see this quote in the article:
Scaling is achieved by scaling the detector itself, rather
than scaling the image. This process makes sense because the features
can be evaluated at any scale with the same cost. Good detection
results were obtained using scales which are a factor of
1.25 apart.
And I see this quote in the Wikipedia article:
Instead of scaling the image itself (e.g. pyramid-filters), we scale the features.
And I'm thrown off as to what exactly is going on. How can the 24 x 24 detector be scaled? The entire set of features calculated on this area (whether in the 200-feature version or the ~6K-feature cascade version) is based on originally exploring the 162K possible features of this 24 x 24 rectangle. And why did I get the impression that the pyramid paradigm still holds for this algorithm?
What should change in the above algorithm? Is it the image which is scaled or the 24 x 24 detector, and if it is the detector how is it exactly done?
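If I try to make the quoted sentences concrete, the only thing I can come up with is something like the sketch below (my own guess, not code from the paper): the feature rectangles, defined relative to the 24 x 24 window, are multiplied by the window scale and read directly from the full-resolution integral image, so a feature costs the same handful of lookups at any scale. Is this what "scaling the detector" means?

import numpy as np

def integral_image(img):
    return np.cumsum(np.cumsum(img.astype(np.float64), axis=0), axis=1)

def rect_sum(ii, x, y, w, h):
    # Sum over [y, y+h) x [x, x+w): always four lookups, whatever w and h are.
    a = ii[y + h - 1, x + w - 1]
    b = ii[y - 1, x + w - 1] if y > 0 else 0.0
    c = ii[y + h - 1, x - 1] if x > 0 else 0.0
    d = ii[y - 1, x - 1] if y > 0 and x > 0 else 0.0
    return a - b - c + d

# A hypothetical two-rectangle feature defined relative to the 24 x 24 base window.
FEAT = dict(x=6, y=8, w=12, h=8)

def eval_feature(ii, win_x, win_y, scale):
    # "Scaling the detector": scale the feature's rectangle, not the image.
    x = win_x + int(round(FEAT["x"] * scale))
    y = win_y + int(round(FEAT["y"] * scale))
    w = int(round(FEAT["w"] * scale))
    h = int(round(FEAT["h"] * scale))
    left  = rect_sum(ii, x,          y, w // 2, h)
    right = rect_sum(ii, x + w // 2, y, w - w // 2, h)
    return left - right   # eight lookups whether scale is 1.0 or 3.0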
In the tex.stackexchange thread Mouse Control of 360 Video in Beamer, we are thinking about how to store 3D data in a picture. The stepping size is 10 degrees; as a matrix, that means 36 x 18 projections (= 648 individual image files). All of them could be stored in a single image, but we are not sure how. I can do a 2D panorama viewer; I am trying to do the 3D viewer such that the result has a manageable size (hence the 10 degree step size).
Proposal #1
ASCII example (in an equispaced font) where the 9 (= 18/2) panorama pictures of the hemisphere are stacked on top of each other, one row per angle; everything can be in one picture:
1 2 3 4 ... 34 35 36
___2D-panorama #1__ 1
___2D-panorama #2__ 2
________etc________ 3
___________________ 4
___________________ ...
___________________ 8
___2D-panorama_ #9_ 9
How to store the 3D panorama data in an image?
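For what it's worth, if the tiles are laid out with yaw along the columns and pitch along the rows, picking the right tile at view time is just integer arithmetic; a rough Python sketch with assumed tile dimensions:

STEP = 10                    # degrees between stored projections
COLS, ROWS = 36, 18          # 360/10 yaw steps x 180/10 pitch steps
TILE_W, TILE_H = 512, 256    # hypothetical size of one stored projection

def tile_origin(yaw_deg, pitch_deg):
    # Pixel origin, inside the big image, of the projection nearest (yaw, pitch).
    col = int(round(yaw_deg / STEP)) % COLS
    row = min(max(int(round((pitch_deg + 90) / STEP)), 0), ROWS - 1)
    return col * TILE_W, row * TILE_H

print(tile_origin(90, 0))    # e.g. the tile for yaw 90 degrees, pitch 0 degrees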
I have a D3.js area chart whose data contains daily values of some metric. For many days the value is the same, so the graph has peaks at certain points where the value goes up or down, and horizontal lines for the dates where the value stays the same. How can I highlight with circles only the "peaks" (the dates where the data differs from the previous day, whether it increases or decreases)?
If I render circles for every datum, they repeat right next to each other for the dates with the same value.
For example consider this sample: http://bl.ocks.org/mbostock/3883195 but using this data:
date close
14-May-14 10
15-May-14 10
16-May-14 12
17-May-14 12
18-May-14 10
19-May-14 10
20-May-14 10
21-May-14 10
22-May-14 28
24-May-14 39
25-May-14 39
26-May-14 49
27-May-14 49
28-May-14 59
28-May-14 48
30-May-14 49
This is the rendered chart:
I would like to highlight only the value changes, so the resulting chart looks like this:
This is pretty late I guess but I've recently been doing research into similar things:
// Select the plotted data points (NVD3 renders them as circle.nv-point).
d3.select('#chart1 svg').selectAll("circle.nv-point")
    // Bind the same data series that drew the chart (testdata is my dataset).
    .data(testdata[0].values)
    // Keep only the points to highlight (here: values greater than 20000).
    .filter(function(d) { return d.y > 20000; })
    // The points are not visible by default; make the kept ones show up.
    .style("fill-opacity", 1);
The above is what I've been using so far; it highlights all the points whose values are greater than 20000 (of course the threshold would be different for your data) and makes those points visible.
Hopefully this helps some; I'm new to d3 and have been trying to find answers to a similar problem to yours.
While reading about the resolution of a digital image at the following link http://www.rideau-info.com/photos/whatis.html, I got confused by the following paragraph:
If the field of view is 20 feet across, a 3 megapixel camera will be resolving that view at 102 pixels per foot. If that same shot was taken with an 18 Mp camera it would be resolving that view at 259 pixels per foot, 2.5 times more resolution than a 3 Mp camera.
How does the author arrive at the figures of 102 pixels per foot and 259 pixels per foot?
A 3 MP camera, in that article, is 2048 pixels wide x 1536 high. Think of the 2048 pixels across as 2048 boxes laid out in a straight line. Now, if you were to divide them equally amongst 20 sections (the 20 feet of field of view), you would get ~102 boxes per section. Hence the 102 pixels per foot. Similar reasoning applies to the 18 MP camera, which is 5184 wide x 3456 high: 5184 divided by 20 is ~259.
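In code form (an illustrative sketch, not from the article):

def pixels_per_foot(sensor_width_px, field_of_view_ft):
    return sensor_width_px / field_of_view_ft

print(pixels_per_foot(2048, 20))   # ~102.4  (3 MP camera, 2048 x 1536)
print(pixels_per_foot(5184, 20))   # ~259.2  (18 MP camera, 5184 x 3456)
print(259.2 / 102.4)               # ~2.5x the linear resolution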