What is the method to draw diagrams in AutoCAD with variable dimensions, such that the diagram changes depending on the value of the variable?
Sounds like you're talking about parametric dimensions.
Here's an article from Autodesk that might get you started.
The Direct2D transforms overview (https://learn.microsoft.com/en-us/windows/desktop/direct2d/direct2d-transforms-overview) seems clear that "Direct2D supports only affine (linear) transformations".
But what if I need to transform an image to some arbitrary points; what are my options in 2019? I note this has been asked before (Mapping corners to arbitrary positions using Direct2D), but that was in 2012, and I am wondering whether there is any current option.
I had naively assumed that if I had a projective transform matrix (from cv::getPerspectiveTransform, for instance) then things would just work. Guess it pays to RTFM before diving into Direct2D.
You can probably use effects to achieve that, for example CLSID_D2D13DPerspectiveTransform or CLSID_D2D13DTransform. I believe they act as a post-processing step: you prepare your image, set it as the effect's input, and draw with the selected effect.
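Something along these lines (a rough, untested sketch; d2dContext and sourceBitmap are assumed to exist already, and a..h are the first eight coefficients of a 3x3 homography such as cv::getPerspectiveTransform returns, with H33 normalized to 1):

```cpp
#include <d2d1_1.h>
#include <d2d1effects.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

// Sketch: apply a 2D projective (perspective) transform using the
// built-in 3D transform effect. 'd2dContext' and 'sourceBitmap' are
// assumed to be created elsewhere.
void DrawPerspective(ID2D1DeviceContext* d2dContext,
                     ID2D1Bitmap* sourceBitmap,
                     float a, float b, float c,
                     float d, float e, float f,
                     float g, float h)
{
    ComPtr<ID2D1Effect> effect;
    d2dContext->CreateEffect(CLSID_D2D13DTransform, &effect);
    effect->SetInput(0, sourceBitmap);

    // Embed the 3x3 homography into Direct2D's row-vector 4x4 matrix, so
    // (x, y, 0, 1) * M = (a*x + b*y + c, d*x + e*y + f, 0, g*x + h*y + 1).
    D2D1_MATRIX_4X4_F m = D2D1::Matrix4x4F(
        a,    d,    0.0f, g,
        b,    e,    0.0f, h,
        0.0f, 0.0f, 1.0f, 0.0f,
        c,    f,    0.0f, 1.0f);
    effect->SetValue(D2D1_3DTRANSFORM_PROP_TRANSFORM_MATRIX, m);

    d2dContext->DrawImage(effect.Get());
}
```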
I'm reading up on Direct2D before I migrate my GDI code to it, and I'm trying to figure out how paths work. I understand most of the work involved with geometries and geometry sinks, but there's one thing I don't understand: the D2D1_FIGURE_BEGIN type and the parameter of that type taken by BeginFigure().
First, why is this value even needed? Why does a geometry need to know ahead of time whether it's filled or hollow? I don't know any other drawing API that cares up front about whether path objects are filled; you just define the endpoints of the shapes and then call fill() or stroke() to draw your path, so how are geometries any different?
And if this parameter is necessary, how does choosing one value over the other affect the shapes I draw?
Finally, if I understand the usage of this enumeration correctly, you're supposed to use filled paths only with FillGeometry() and hollow paths only with DrawGeometry(). However, the hourglass example here, cited by several method documentation pages (like the one for BeginFigure()), creates a filled figure and draws it with both DrawGeometry() and FillGeometry()! Is this undefined behavior? Does it have anything to do with the blue border around the gradient in the example picture, which I don't see anywhere in the code?
Thanks.
EDIT: Okay, I think I understand what's going on with the gradient's weird outline: the gradient is also transitioning alpha values, and the fill overlaps the stroke because the stroke is centered on the line and the fill is drawn after the stroke. That still doesn't explain why I can fill and stroke with a filled geometry, or what the difference between hollow and filled geometries is...
Also, I just realized that hollow geometries are documented as not having bounds. Does this mean that hollow geometries are purely an optimization for stroke-only paths and otherwise behave identically to filled geometries?
If you want to better understand Direct2D's geometry system, I recommend studying the WPF geometry system. WPF, XPS, Direct2D, Silverlight, and the newer "XAML" frameworks all use the same building blocks (the same "language", if you will). I found it easier to understand the declarative object-oriented API in WPF, and after that it was a breeze to work with the imperative API in Direct2D.

You can think of WPF's mutable geometry system as an implementation of the "builder" pattern from Java, where the build() method is behind the scenes (hidden from you) and spits out an immutable Direct2D geometry when it comes time to render things on-screen (WPF uses something called "MIL", which, IIRC/AFAICT, Direct2D was forked from. They really are the same thing!)

It is also straightforward to write code that converts between the two representations, e.g. walking a WPF PathGeometry and streaming it into a Direct2D geometry sink, and you can also use ID2D1PathGeometry::Stream with a custom ID2D1GeometrySink implementation to reconstitute a WPF PathGeometry.
(BTW this is not theoretical :) It's exactly what I do in Paint.NET 4.0+: I use a WPF-esque declarative, mutable object model that spits out immutable Direct2D geometries at render time. It works really well!)
Okay, anyway, to get directly to your specific question: BeginFigure() and D2D1_FIGURE_BEGIN map directly to the PathFigure.IsFilled property in WPF. In order to get an intuitive understanding of what effect this has, you can use something like KaXAML to play around with some geometries from WPF or Silverlight samples and see what the results look like. And the documentation is definitely better for WPF and Silverlight than for Direct2D.
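If it helps, here's roughly what the two flavors look like side by side in Direct2D (a sketch with hypothetical coordinates; factory, rt, and brush are assumed to exist):

```cpp
#include <d2d1.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

// Sketch: one geometry containing a FILLED figure and a HOLLOW figure.
// 'factory', 'rt', and 'brush' are assumed to be created elsewhere.
void DemoFigureBegin(ID2D1Factory* factory,
                     ID2D1RenderTarget* rt,
                     ID2D1Brush* brush)
{
    ComPtr<ID2D1PathGeometry> path;
    factory->CreatePathGeometry(&path);

    ComPtr<ID2D1GeometrySink> sink;
    path->Open(&sink);

    // Filled figure: participates in fills, hit-testing, and bounds.
    sink->BeginFigure(D2D1::Point2F(10, 10), D2D1_FIGURE_BEGIN_FILLED);
    sink->AddLine(D2D1::Point2F(110, 10));
    sink->AddLine(D2D1::Point2F(60, 90));
    sink->EndFigure(D2D1_FIGURE_END_CLOSED);

    // Hollow figure: stroked by DrawGeometry() but never filled, and
    // excluded from bounds computations.
    sink->BeginFigure(D2D1::Point2F(150, 10), D2D1_FIGURE_BEGIN_HOLLOW);
    sink->AddLine(D2D1::Point2F(250, 10));
    sink->AddLine(D2D1::Point2F(200, 90));
    sink->EndFigure(D2D1_FIGURE_END_CLOSED);

    sink->Close();

    rt->FillGeometry(path.Get(), brush);        // fills only the first figure
    rt->DrawGeometry(path.Get(), brush, 2.0f);  // strokes both outlines
}
```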
Another key concept is that DrawGeometry is basically a helper method. You can accomplish the same thing by first widening your geometry with ID2D1Geometry::Widen and then using FillGeometry ("widening" seems like a misnomer to me, btw: in Photoshop or Illustrator you'd probably use a verb like "stroke"). That's not to say that either one always performs better or worse ... be sure to benchmark; I've seen it go both ways.

You can think of DrawGeometry as a helper because the lowest level of the rasterization engine can only do one thing: fill a triangle. All other drawing "primitives" must be converted to triangle lists or strips (this is also why ID2D1Mesh is so fast: it bypasses all sorts of processing code!). Filling a geometry requires tessellating its interior into a list of triangle strips, which can then be filled by Direct3D. "Drawing" a geometry requires applying a stroke (width and/or style): even a simple 1-pixel-wide straight line must first be converted to 2 filled triangles.
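Roughly, the Widen-then-Fill equivalence looks like this (a sketch; the Direct2D objects are assumed to be created elsewhere):

```cpp
#include <d2d1.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

// Sketch: reproduce DrawGeometry by widening the source geometry into a
// new path geometry and filling that instead.
void FillAsStroke(ID2D1Factory* factory,
                  ID2D1RenderTarget* rt,
                  ID2D1Geometry* source,
                  ID2D1Brush* brush,
                  float strokeWidth)
{
    ComPtr<ID2D1PathGeometry> widened;
    factory->CreatePathGeometry(&widened);

    ComPtr<ID2D1GeometrySink> sink;
    widened->Open(&sink);

    // Convert the stroke outline into a fillable area
    // (nullptr = default stroke style, no world transform).
    source->Widen(strokeWidth, nullptr, nullptr, sink.Get());
    sink->Close();

    // Roughly equivalent to rt->DrawGeometry(source, brush, strokeWidth).
    rt->FillGeometry(widened.Get(), brush);
}
```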
Oh, also, if you want to compute the "real" bounds of a geometry with hollow figures, use ID2D1Geometry::GetWidenedBounds with a strokeWidth of zero. This is a discrepancy between Direct2D and WPF that puzzles me. Geometry.Bounds (in WPF) is equivalent to ID2D1Geometry::GetWidenedBounds(0.0f).
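In code that looks something like this ('geometry' is assumed to be any ID2D1Geometry you've built):

```cpp
// Sketch: bounds that include hollow figures, unlike plain GetBounds().
D2D1_RECT_F bounds = {};
HRESULT hr = geometry->GetWidenedBounds(
    0.0f,      // strokeWidth of zero: the "real" outline bounds
    nullptr,   // default stroke style
    nullptr,   // no world transform
    &bounds);
```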
I've been making a game in SceneKit, but the edges of objects are difficult to see, making some of the game's details impossible to see. Is there a way to put a black outline around all the game objects?
You could use an SCNTechnique as mentioned in another answer (have a look at this article about cel shading, which has an edge-detection pass), but full-frame post-processes are quite expensive.
On OS X you can also leverage geometry shaders (see this article), but they're not available on iOS and might be harder to implement and get right.
I would go with a much easier technique, which only involves vertex and fragment shaders. You can take a look at this article, which gives an example that's easy to re-create in SceneKit using SCNProgram or shader modifiers.
There is an example of making a glowing outline for nodes that uses SCNTechnique here:
https://github.com/laanlabs/SCNTechniqueGlow
You could modify the color and blur method to achieve a stroked outline effect.
Another SCNTechnique example, as referenced here: https://www.nurfacegames.com/everything-you-wanted-to-know-about-outline-shaders/, is to render your node slightly scaled up behind, then again in front at regular size.
Here's a playground example of that: https://github.com/mackhowell/scenekit-outline-shader-scntechnique.
I'm just curious if someone knows how to analyse an image. For example:
I have a heatmap picture; now I want to extract the color value and the x,y coordinates and redraw the image with JavaScript & canvas.
Another example would be to recognize patterns in the image (lines, arrows) and extract their direction and length.
A popular image/video analysis library I would recommend is OpenCV. I have used this github fork of the ruby-opencv gem with success. If you scroll down on the readme file, you'll see an example on face detection. The unit tests demonstrate how to do other things like drawing shapes and such. At a glance, I don't see any tests on extracting pixel data, but it most definitely is possible.
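For a taste of pixel extraction, here's a minimal sketch against OpenCV's native C++ API (the gem wraps these same calls; the file name is hypothetical):

```cpp
#include <opencv2/opencv.hpp>
#include <cstdio>

int main() {
    // Load the heatmap image (path is hypothetical).
    cv::Mat img = cv::imread("heatmap.png");
    if (img.empty()) return 1;

    // Walk every pixel and read its BGR color value.
    for (int y = 0; y < img.rows; ++y) {
        for (int x = 0; x < img.cols; ++x) {
            cv::Vec3b bgr = img.at<cv::Vec3b>(y, x);
            // Emit x, y, and RGB; this could be serialized to JSON and
            // redrawn client-side with JavaScript & canvas.
            std::printf("%d,%d: %d %d %d\n", x, y, bgr[2], bgr[1], bgr[0]);
        }
    }
    return 0;
}
```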
If you need something more simple, you can try out devil. It's more user-friendly and is focused on image manipulation, but you can probably extract pixel data with it.
It sounds like you'll be leaning towards OpenCV. It might be useful to look at this previous question, specifically the mention of the Hough transform.
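If you do go the OpenCV route, a Hough-transform sketch like this (C++; the file name is hypothetical, the thresholds arbitrary) gets you the direction and length of detected line segments:

```cpp
#include <opencv2/opencv.hpp>
#include <cmath>
#include <cstdio>

// Sketch: detect line segments with the probabilistic Hough transform
// and report each segment's direction and length.
int main() {
    cv::Mat img = cv::imread("diagram.png", cv::IMREAD_GRAYSCALE);
    if (img.empty()) return 1;

    // Edge map first; the Hough transform works on a binary edge image.
    cv::Mat edges;
    cv::Canny(img, edges, 50.0, 150.0);

    std::vector<cv::Vec4i> segments;
    cv::HoughLinesP(edges, segments, 1.0, CV_PI / 180.0,
                    50 /*votes*/, 30.0 /*min length*/, 10.0 /*max gap*/);

    for (const cv::Vec4i& s : segments) {
        double dx = s[2] - s[0], dy = s[3] - s[1];
        double length = std::hypot(dx, dy);
        double angleDeg = std::atan2(dy, dx) * 180.0 / CV_PI;
        std::printf("(%d,%d)-(%d,%d) length=%.1f angle=%.1f\n",
                    s[0], s[1], s[2], s[3], length, angleDeg);
    }
    return 0;
}
```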
I'm messing around with image manipulation, mostly using Python. I'm not too worried about performance right now, as I'm just doing this for fun. Thus far, I can load bitmaps, merge them (according to some function), and do some REALLY crude analysis (find the brightest/darkest points, that kind of thing).
I'd like to be able to take an image, generate a set of control points (which I can more or less do now), and then smudge the image, starting at a control point and moving in a particular direction. What I'm not sure of is the process of smudging itself. What's a good algorithm for this?
This question is pretty old but I've recently gotten interested in this very subject so maybe this might be helpful to someone. I implemented a 'smudge' brush using Imagick for PHP which is roughly based on the smudging technique described in this paper. If you want to inspect the code feel free to have a look at the project: Magickpaint
Try PythonMagick (ImageMagick library bindings for Python). If you can't find it in your distribution's repositories, get it here: http://www.imagemagick.org/download/python/
It has more effect functions than you can shake a stick at.
One method would be to apply a Gaussian blur (or some other type of blur) to each point in the region defined by your control points.
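A rough sketch of that idea, written against OpenCV's C++ API for concreteness (the question is Python-flavored, but the same calls exist in cv2; the window size and step count are arbitrary choices):

```cpp
#include <opencv2/opencv.hpp>
#include <algorithm>

// Sketch: "smudge" by repeatedly blurring a small window that slides
// from a control point along a direction.
void Smudge(cv::Mat& img, cv::Point start, cv::Point2f dir,
            int steps = 20, int radius = 8)
{
    const int side = 2 * radius + 1;  // odd, as GaussianBlur requires
    cv::Point2f p(static_cast<float>(start.x), static_cast<float>(start.y));

    for (int i = 0; i < steps; ++i) {
        // Clamp the window so the ROI stays inside the image.
        int x = std::clamp(static_cast<int>(p.x) - radius, 0, img.cols - side);
        int y = std::clamp(static_cast<int>(p.y) - radius, 0, img.rows - side);
        cv::Mat roi = img(cv::Rect(x, y, side, side));

        // Blurring each window in place drags pixel values along the
        // path, which reads as a crude smudge.
        cv::GaussianBlur(roi, roi, cv::Size(side, side), 0.0);

        p += dir;  // advance along the smudge direction
    }
}
```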
One method would be to create a grid that your control points move, and then use texture-mapping techniques to map the image back onto the distorted grid.
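That grid-warp approach might look like this with OpenCV's remap (a sketch assuming a single control point displaced with a smooth falloff; again C++, but cv2.remap works the same way):

```cpp
#include <opencv2/opencv.hpp>
#include <algorithm>
#include <cmath>

// Sketch of the grid/texture-mapping idea: build a per-pixel lookup
// grid, displace it near a control point with a linear falloff, and
// let cv::remap resample the image onto the distorted grid.
cv::Mat GridSmudge(const cv::Mat& src, cv::Point2f ctrl,
                   cv::Point2f offset, float falloffRadius)
{
    cv::Mat mapX(src.size(), CV_32FC1);
    cv::Mat mapY(src.size(), CV_32FC1);

    for (int y = 0; y < src.rows; ++y) {
        for (int x = 0; x < src.cols; ++x) {
            // Weight fades from 1 at the control point to 0 at the radius.
            float dist = std::hypot(x - ctrl.x, y - ctrl.y);
            float w = std::max(0.0f, 1.0f - dist / falloffRadius);

            // Each output pixel samples from a source location pulled
            // back against the smudge direction.
            mapX.at<float>(y, x) = x - w * offset.x;
            mapY.at<float>(y, x) = y - w * offset.y;
        }
    }

    cv::Mat dst;
    cv::remap(src, dst, mapX, mapY, cv::INTER_LINEAR);
    return dst;
}
```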
I can vouch for the Gaussian blur mentioned above; it is quite simple to implement and provides a fairly decent blur result.
James