In a UWP application, I want to animate paths (e.g. composed of Bézier segments) as if they were being drawn with a pen, i.e. in the example
<Path x:Name="path" Stroke="Black" Data="M 100,200 C 100,25 400,350 400,175" />
it should start incrementally drawing a Bézier segment from (100, 200) to (400, 175) using the given control points. TBH, I am completely lost as I have not found a property on Path that could be used as control variable. I first thought I could build something using PointAnimationUsingPath, but this is not available in UWP. Is there any way to achieve this, preferably without evaluating the Bézier curve manually in key frames?
I would suggest using Win2D with the CanvasAnimatedControl.
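Roughly what that could look like: replace the XAML Path with a CanvasAnimatedControl (assumed here to be hooked up as <canvas:CanvasAnimatedControl x:Name="canvas" Draw="OnDraw"/>, with a made-up 3-second duration) and redraw the partial curve every frame. You still split the Bézier yourself, but via an exact De Casteljau subdivision in the Draw handler rather than key-frame sampling:

// Namespaces used: Microsoft.Graphics.Canvas.Geometry, Microsoft.Graphics.Canvas.UI.Xaml,
// System.Numerics and Windows.UI (Win2D comes from the Win2D.uwp NuGet package).
void OnDraw(ICanvasAnimatedControl sender, CanvasAnimatedDrawEventArgs args)
{
    // Animation progress 0..1 over an assumed 3-second duration.
    float t = (float)Math.Min(1.0, args.Timing.TotalTime.TotalSeconds / 3.0);

    // The cubic from the question: M 100,200 C 100,25 400,350 400,175
    Vector2 p0 = new Vector2(100, 200), p1 = new Vector2(100, 25);
    Vector2 p2 = new Vector2(400, 350), p3 = new Vector2(400, 175);

    // De Casteljau subdivision: control points of the sub-curve from 0 to t.
    Vector2 p01 = Vector2.Lerp(p0, p1, t), p12 = Vector2.Lerp(p1, p2, t), p23 = Vector2.Lerp(p2, p3, t);
    Vector2 p012 = Vector2.Lerp(p01, p12, t), p123 = Vector2.Lerp(p12, p23, t);
    Vector2 end = Vector2.Lerp(p012, p123, t);

    var builder = new CanvasPathBuilder(sender);
    builder.BeginFigure(p0);
    builder.AddCubicBezier(p01, p012, end);
    builder.EndFigure(CanvasFigureLoop.Open);

    using (var geometry = CanvasGeometry.CreatePath(builder))
    {
        args.DrawingSession.DrawGeometry(geometry, Colors.Black, 2.0f);
    }
}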
As far as I'm aware, there isn't any particular way of doing depth in SVG. Elements are drawn in the order they appear, so if something needs to go behind something else, it must be earlier in the document.
I'm trying to create an animation which, for simplicity, let's say is the Earth orbiting the Sun. Doing this from above would be easy enough: just follow a circular path. However, I want to view it from quite a shallow angle, so the Earth would be going behind the Sun.
My current idea is to have two Earths, one before and one after the Sun. Something like...
<use xlink:href="#earth" />
<use xlink:href="#sun" />
<use xlink:href="#earth" />
Then I animate the two Earths, one going right-to-left and the other left-to-right, and alternating their visibility to only appear half the time. With the appropriate timing, this results in a single Earth being seen to orbit the Sun.
Honestly I quite like this idea, but I'm just wondering if anyone knows of a better way to do this kind of "3D-like" thing.
Why have two of the complicated things? It would be much easier to have two suns. Animate your earth on a continuous elliptical path. Then switch between your two suns at the right times.
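A rough SMIL sketch of that idea (all sizes and timings made up): the earth follows a single continuous elliptical path, and a second, identical sun drawn on top is simply switched on while the earth is on the far half of its orbit:

<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 400 200">
  <!-- back sun: always visible, drawn before (i.e. behind) the earth -->
  <circle cx="200" cy="100" r="30" fill="orange"/>
  <!-- earth: one continuous elliptical orbit, front half first -->
  <circle r="10" fill="steelblue">
    <animateMotion dur="8s" repeatCount="indefinite"
        path="M 320,100 A 120 30 0 0 1 80,100 A 120 30 0 0 1 320,100"/>
  </circle>
  <!-- front sun: identical, drawn after (i.e. over) the earth, and shown only
       during the second half of the cycle, while the earth is on the far side -->
  <circle cx="200" cy="100" r="30" fill="orange">
    <animate attributeName="visibility" calcMode="discrete"
        values="hidden;visible" keyTimes="0;0.5"
        dur="8s" repeatCount="indefinite"/>
  </circle>
</svg>

The same switching could of course be done from script or CSS instead of SMIL.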
You can create two sibling <g> elements (call them topLayer / bottomLayer or similar), and then move your planet elements from one layer to the other without duplicating them.
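A minimal sketch of that approach, assuming the animation is driven from script (the element and layer ids are made up):

<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 400 200">
  <g id="backLayer"><circle id="planet" cx="80" cy="100" r="10" fill="steelblue"/></g>
  <circle id="sun" cx="200" cy="100" r="30" fill="orange"/>
  <g id="frontLayer"></g>
  <script>
    // appendChild moves the existing node rather than copying it, so the planet
    // is never duplicated; call this whenever it crosses from front to back or back to front.
    function sendPlanetBehind(behind) {
      var layer = document.getElementById(behind ? 'backLayer' : 'frontLayer');
      layer.appendChild(document.getElementById('planet'));
    }
  </script>
</svg>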
I am creating a tweening web map that switches between different configurations of borders within a map. I'm using d3.js to do this; here's an example of where I'm going: http://jsfiddle.net/cormundo/91pjd3z4/40/. I'm planning on adding some other static sections of SVG as well as tweening segments.
I've created a few configurations of line data (not polygons, as I'm thinking these might be easier to work with) within ArcMap, and I've exported those to SVGs. When I looked at the SVGs closely, I realized that they consist of a lot of different segments, each of which is contained in a different path element in the SVG. This of course makes working with them in d3.js quite messy, so I'm trying to combine as many of them into single segments as possible.
Example segment ->
<path xmlns="http://www.w3.org/2000/svg" clip-path="url(#SVG_CP_1)" fill="none" stroke="#A16600" stroke-width="0.95997" stroke-miterlimit="10" stroke-linecap="round" stroke-linejoin="round" d=" M236.87355,455.7626L238.79349,455.7626"/>
I've tried dissolving the attribute tables together in each layer in ArcMap, but the multitude of line segments is still there. I moved this over to Illustrator, and within Illustrator I'm trying to use the "join path" tool to combine the different segments into one larger line. I'm generally getting two types of errors there that I don't understand:
[With just a few lines]: "To join, you must select two different endpoints of the same or two different paths"
or
[If I select a whole layer of lines] "The selected objects cannot be joined as they are invalid objects (compound paths, closed paths, text objects, graphs, live paint group). You can use join command to connect two or more paths, paths in groups; or to close an open path."
So, what I am asking is:
1) Is there an easy fix for this in Illustrator or ArcMap?
or
2) Is there a better way of doing this?
Thanks!
Figured it out, for anyone with the same problem! ArcGIS exports really messy lines; you've got to go through in AI or Inkscape and join them together, one by one... very time-consuming, but it worked!
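If the manual joining gets too tedious, a rough d3.js (v4+) sketch like the one below can at least collapse all of the exported segments into a single path element. The #borders group id is an assumption, and note that this only concatenates the d attributes into one element; it does not stitch the endpoints into one continuous line:

const layer = d3.select('#borders');
// Each exported segment starts with its own "M x,y" command, so simply
// concatenating the d attributes still yields valid path data.
const combined = layer.selectAll('path').nodes()
    .map(function (p) { return p.getAttribute('d').trim(); })
    .join(' ');
layer.selectAll('path').remove();
layer.append('path')
    .attr('d', combined)
    .attr('fill', 'none')
    .attr('stroke', '#A16600');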
I am showing an SVG map with the coastline drawn with a blurry effect, as shown in this image:
I am using a simple feGaussianBlur filter to draw the coastline below the land polygons:
<filter id="blur">
  <feGaussianBlur in="SourceGraphic" stdDeviation="4"/>
</filter>
The result is satisfying on the north coast. However, some rectangular patterns appear in the red circle. This is due to the segmentation of the coast into several linear elements, whose blurry margins intersect.
Is there a way to fix this and have a 'nice' blurry effect everywhere?
I already tried color-interpolation-filters=sRGB and image-rendering=optimizeQuality without any success.
FYI, the demo map is here with the source code.
I think this is because of the filter region dimensions: the parts that extend beyond it get clipped. Try extending these boundaries:
<filter id="degenCodeNeon" x="-50%" y="-50%" width="200%" height="200%">
The percentages are relative to the objectBoundingBox; you could also specify filterUnits="userSpaceOnUse". But first try this one.
Tried with x,y at -250% and width/height at 600%; that seems to work. I suggest additionally adding a color matrix or component transfer primitive to reduce the alpha completely to 0 below a certain threshold.
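Putting both suggestions together, the filter from the question could look something like this (the region size and the alpha transfer table are values you will need to tune):

<filter id="blur" x="-250%" y="-250%" width="600%" height="600%">
  <feGaussianBlur in="SourceGraphic" stdDeviation="4" result="blurred"/>
  <!-- zero out faint alpha below roughly 0.5, then ramp linearly up to 1 -->
  <feComponentTransfer in="blurred">
    <feFuncA type="table" tableValues="0 0 1"/>
  </feComponentTransfer>
</filter>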
How can I transform the coordinates in a window so that (0,0) is at the bottom left instead of the top left?
I have tried various solutions with
SetMapMode(hdc, MM_TEXT);
SetViewportExtEx(hdc, 0, -clientrect.bottom, NULL);
SetViewportOrgEx(hdc, 0, -clientrect.bottom, NULL);
SetWindowOrgEx(hdc, 0, -clientrect.bottom, NULL);
SetWindowExtEx(hdc, 0, -clientrect.bottom, NULL);
I have even tried googling for a solution, but to no avail, so I turn to you, the more experienced people on the internet.
The idea is that I'm creating a custom control for linear interpolation. I could reverse the coordinate system myself by putting x,y in the top-right corner, but I want to do it right. At the moment I get a reversed linear interpolation when I try to draw it, as I cannot get the coordinates to be bottom-left based.
I'm using the Win32 API, and I suspect I can skip posting the code, since the screen coordinate system is almost identical on all systems; by that I mean (0,0) is "always" the top left of the screen if you are keeping to a standard 2D window and frames.
I really don't expect a whole code sample (to spare you the typing pain), but I would like some direction, as it seems I cannot grasp the simple concept of flipping the coordinates in the Win32 API.
Thanks, and a merry Christmas!
EDIT!
I would like to add my own answer to this question, as I ended up using simple math to flip the view, so to speak.
If, for example, I have the value pair (x, y) = (150, 57) and another pair (100, 75), then I apply the formula height + (-1 * y), i.e. y' = height - y, and voilà, I get a proper Cartesian coordinate field :) Of course, height is an undefined variable in this example, but in my application it is 200 px, so (150, 57) maps to (150, 143) and (100, 75) maps to (100, 125).
According to the documentation for SetViewportOrgEx, you generally want to use it or SetWindowOrgEx, but not both. That said, you probably want the viewport origin to be (0, clientrect.bottom), not (0, -clientrect.bottom).
Setting transforms with GDI always made me crazy. I think you're better off using GDI+. With it, you can create a matrix that describes a translation of (0, clientRect.bottom) and a scaling of (1.0, -1.0), and then set it as the world transform (Graphics::SetTransform in GDI+, or SetWorldTransform if you stay with plain GDI in advanced graphics mode).
See the example at Using Coordinate Spaces and Transformations. For general information about transforms: Coordinate Spaces and Transformations.
Additional information:
I've not tried this with direct Windows API calls, but if I do the following in C# using the Graphics class (which is a wrapper around GDI+), it works:
Graphics g = GetGraphics(); // gets a canvas to draw on
g.TranslateTransform(0, clientRect.Bottom);
g.ScaleTransform(1.0f, -1.0f);
That puts the origin at the bottom left, with x increasing to the right and y increasing as you go up. If you set the equivalent world transform as I suggested, the above will work for you.
If you have to use GDI, then you'll want to use SetViewportOrgEx(0, clientRect.bottom), and then set the scaling. I don't remember how to do scaling with the old GDI functions.
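For reference, the classic way to flip the y axis with the old mapping-mode functions is an anisotropic mode with a negated y extent; a sketch, assuming hdc and hwnd come from your WM_PAINT handler:

RECT rc;
GetClientRect(hwnd, &rc);
SetMapMode(hdc, MM_ANISOTROPIC);
SetWindowExtEx(hdc, rc.right, rc.bottom, NULL);    /* logical extent = client size */
SetViewportExtEx(hdc, rc.right, -rc.bottom, NULL); /* negative y extent flips the axis */
SetViewportOrgEx(hdc, 0, rc.bottom, NULL);         /* put the origin at the bottom left */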
Note also that the documentation for SetViewportExtEx says:
When the following mapping modes are set, calls to the SetWindowExtEx and SetViewportExtEx functions are ignored.
MM_HIENGLISH
MM_HIMETRIC
MM_LOENGLISH
MM_LOMETRIC
MM_TEXT
MM_TWIPS
I'm searching for a certain object in my photograph:
Object: the outline of a rectangle with an X in the middle. It looks like a rectangular checkbox. That's all: no fill, just lines. The rectangle will always have the same length-to-width ratio, but it could be at any size or rotation in the photograph.
I've looked at a whole bunch of image recognition approaches, but I'm trying to determine the best one for this specific task. Most importantly, the object is made of lines and is not a filled shape. Also, there is no perspective distortion, so the rectangular object will always have right angles in the photograph.
Any ideas? I'm hoping for something that I can implement fairly easily.
Thanks all.
You could try using a corner detector (e.g. Harris) to find the corners of the box, the ends and the intersection of the X. That simplifies the problem to finding points in the right configuration.
Edit (response to comment):
I'm assuming you can find the corner points in your image, the 4 corners of the rectangle, the 4 line endings of the X and the center of the X, plus a few other corners in the image due to noise or objects in the background. That simplifies the problem to finding a set of 9 points in the right configuration, out of a given set of points.
My first try would be to look at each corner point A. Then I'd iterate over the points B close to A. Now if I assume that (e.g.) A is the upper left corner of the rectangle and B is the lower right corner, I can easily calculate where I would expect the other corner points to be in the image. I'd use some nearest-neighbor search (or a library like FLANN) to see if there are corners where I'd expect them. If I can find a set of points that matches these expected positions, I know where the symbol would be, if it is present in the image.
You'll have to try whether that is good enough for your application. If you get too many false positives (sets of corners of other objects that accidentally form a rectangle + X), you could check whether there are lines (i.e. high contrast in the right direction) where you would expect them, and whether there is low contrast where the pattern has no lines. This should be relatively straightforward once you know which points in the image correspond to the corners/line endings of the object you're looking for.
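The prediction step described above is language-agnostic; a hypothetical C# helper could look roughly like this, assuming the centre of the X coincides with the centre of the rectangle and that a and b really are diagonally opposite corners:

using System;
using System.Collections.Generic;
using System.Numerics;

static class BoxMatcher
{
    // Given two detected corners assumed to be diagonally opposite, and the known
    // height-to-width ratio of the box, predict where the other two rectangle
    // corners and the centre should be, then check whether detected corners
    // exist near those predictions.
    public static bool MatchesBox(Vector2 a, Vector2 b, float heightOverWidth,
                                  IReadOnlyList<Vector2> detectedCorners, float tolerance)
    {
        Vector2 center = (a + b) / 2;            // also the centre of the X
        Vector2 halfDiagonal = a - center;
        float phi = MathF.Atan(heightOverWidth); // angle between a diagonal and the "width" side

        // The second diagonal is the first one rotated by +/- 2*phi; try both,
        // since we don't know which kind of corner 'a' is.
        foreach (float angle in new[] { 2 * phi, -2 * phi })
        {
            Vector2 c = center + Rotate(halfDiagonal, angle);
            Vector2 d = center - Rotate(halfDiagonal, angle);
            if (HasCornerNear(c, detectedCorners, tolerance) &&
                HasCornerNear(d, detectedCorners, tolerance) &&
                HasCornerNear(center, detectedCorners, tolerance))
                return true;
        }
        return false;
    }

    static Vector2 Rotate(Vector2 v, float angle) =>
        new Vector2(v.X * MathF.Cos(angle) - v.Y * MathF.Sin(angle),
                    v.X * MathF.Sin(angle) + v.Y * MathF.Cos(angle));

    // Plain linear scan; swap in a k-d tree or FLANN index if there are many corners.
    static bool HasCornerNear(Vector2 p, IReadOnlyList<Vector2> corners, float tolerance)
    {
        foreach (var c in corners)
            if (Vector2.DistanceSquared(p, c) <= tolerance * tolerance)
                return true;
        return false;
    }
}

The X's own line endings could be checked the same way once you know how far they sit from the corners.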
I'd suggest the Generalized Hough Transform. It seems you have a fairly simple, fixed shape. The generalized Hough transform should be able to detect that shape at any rotation or scale in the image. You may need to threshold the original image, or pre-process it in some way, for this method to be useful though.
You can use local features to identify the object in the image. Feature detection wiki
For example, you can compute features on a reference image that contains only the object you're looking for and save the results, let's say, to a plain text file. After that you can search for the object just by comparing newly computed features (on images with complex scenes containing the object) with the reference ones.
Here's a good resource on local features:
Local Invariant Feature Detectors: A Survey