JavaFX 8 Path drawing performance

I am looking for a way to draw a lot of paths fast in JavaFX 8; the catch is that I want to animate the paths.
I tried the standard Path and Polyline nodes, and I also tried drawing into a GraphicsContext/Canvas, but everything is just too slow.
In a test I rotate about 200 rectangles 1 px wide, and this works perfectly: in the pulse logger output I see that everything gets rendered within 16 ms. In comparison, if I draw 200 straight lines with a Path node (same visual output as the 1 px rectangles) and animate them, I get very bad results, and the paint task in the pulse logger shows 200 ms.
I guess the problem is that the path is not drawn via OpenGL; instead it falls back to software rendering.
I use a lot of lineTo() calls in the paths I want to render, so a very inelegant workaround would be to draw only the curved parts with the Path node and place rectangles over the parts where I usually use lineTo(). I guess this would speed things up a lot, but it is very inflexible and hackish.
Is there anything else I can do to get the paths rendered fast?
I have already tried setSmooth(false); caching is not an option, as the paths get animated.
The final platform will be Linux; I am currently developing on a Mac.
Thanks, Robi

Currently JavaFX does all its path rendering in software, not in hardware, and there is not much you can do about this, especially since caching is not an option for you. The only possibility I see is to have a look at the 3D stuff: maybe you can use a triangle mesh for your purposes. That's something I always wanted to try but never had the time to do :-)
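For what it's worth, here is a rough, untested sketch of that triangle-mesh idea: every line segment becomes a thin quad (two triangles) inside one TriangleMesh, so the whole batch is a single MeshView node that can be transformed and animated as one unit. The class and helper names are mine, and the geometry is deliberately simple:

```java
import javafx.application.Application;
import javafx.scene.Group;
import javafx.scene.Scene;
import javafx.scene.paint.Color;
import javafx.scene.paint.PhongMaterial;
import javafx.scene.shape.CullFace;
import javafx.scene.shape.MeshView;
import javafx.scene.shape.TriangleMesh;
import javafx.stage.Stage;

public class LineMeshDemo extends Application {

    // Build one mesh containing every line as a thin quad (two triangles),
    // so the whole batch is a single node. Each entry of `lines` is
    // {x1, y1, x2, y2}; segments are assumed to have nonzero length.
    static TriangleMesh linesToMesh(float[][] lines, float width) {
        TriangleMesh mesh = new TriangleMesh();
        mesh.getTexCoords().addAll(0, 0); // one dummy texcoord shared by all faces
        int p = 0;
        for (float[] l : lines) {
            float dx = l[2] - l[0], dy = l[3] - l[1];
            float len = (float) Math.sqrt(dx * dx + dy * dy);
            float nx = -dy / len * width / 2, ny = dx / len * width / 2; // half-width normal
            mesh.getPoints().addAll(
                    l[0] + nx, l[1] + ny, 0,
                    l[0] - nx, l[1] - ny, 0,
                    l[2] + nx, l[3] + ny, 0,
                    l[2] - nx, l[3] - ny, 0);
            mesh.getFaces().addAll(p, 0, p + 1, 0, p + 2, 0,      // first triangle
                                   p + 2, 0, p + 1, 0, p + 3, 0); // second triangle
            p += 4;
        }
        return mesh;
    }

    @Override
    public void start(Stage stage) {
        float[][] lines = new float[200][];
        for (int i = 0; i < lines.length; i++)
            lines[i] = new float[]{50 + i * 3, 50, 50 + i * 3, 350}; // 200 vertical lines
        MeshView view = new MeshView(linesToMesh(lines, 1f));
        view.setMaterial(new PhongMaterial(Color.BLACK));
        view.setCullFace(CullFace.NONE); // ignore winding for this flat mesh
        stage.setScene(new Scene(new Group(view), 800, 450));
        stage.show();
        // Animate by rotating/translating `view` (e.g. from an AnimationTimer)
        // instead of animating hundreds of individual Path nodes.
    }

    public static void main(String[] args) {
        launch(args);
    }
}
```

Whether this actually beats the software path rasterizer for your workload is something you'd have to measure with the pulse logger, but it keeps the per-frame work on one node instead of two hundred.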

Related

Move drawing on window with Win32

I've drawn a lot of scattered pixels on a screen. I want to move them all a few pixels to the left. Doesn't matter what's left on the last columns of pixels – I can deal with them after the fact.
I tried using SetPixel to redraw the translated image, and though it worked, it wasn't as fast as I would have wanted. So I started searching the web for a specific function for my purpose. I came across SetWorldTransform, but from what I can gather, this serves only to translate coordinates, not full regions. I also found OffsetRgn, but again, this seems to have another purpose. I haven't found anything else that seems promising.
Is there anything I can do that's faster than the naive SetPixel?

Why does my object "sink into the ground"?

I have a rather simple react-three-fiber setup that includes cannon.js-powered physics. In the scene there is a cup -- which is modelled as a cylinder whose top radius is bigger than the bottom one -- that is placed on a surface.
When I run the code, during the loading screen everything looks fine. But when physics kick in, the cup suddenly "sinks" into the ground. Why is that? I can't make sense of that...
One theory of mine was that the "physics shape" of the cylinder is not identical with the "optical shape" that gets rendered, but even then the movement I observe still doesn't make sense with any reasonable bounding box I can imagine...
Working example: https://codesandbox.io/s/amazing-proskuriakova-4slpq
Physics are finicky and really hard to debug, because you're often trying to intuit the behavior of an invisible system from its effects on whatever hybrid view you have.
I notice that if I bring the mass down to a more reasonable value, like 5, the object appears to roll around like a sphere or some other shape, so I think your theory is sound. I don't know off the top of my head what the solution is, but I do know that the only physics engine I "trust" in the JS space, except for very simple simulations, is Ammo.js. It's hard to use, but it is an Emscripten port of a truly amazing AAA-quality library. https://threejs.org/examples/?q=phys#physics_ammo_break
I would start by getting a cube and a sphere working. Once you have verified that those work as expected, ideally using real-ish world-scale units (like a 1x1x1 cube with a mass of 1), use a texture on the sphere so you know it's rolling like you expect. Once you have verified that the simpler primitives work, move on to the more complex geometries.
The best way forward would be to open an issue on the use-cannon GitHub. That lib and cannon-es are under active maintenance now. Meanwhile, I believe a ConvexPolyhedron can also do it flawlessly; see: https://codesandbox.io/s/r3f-convex-polyhedron-cnm0s

Rectangle detection in image

I'd like to program detection of a rectangular sheet of paper. It doesn't absolutely need to be perfectly straight on each side, as I may take a picture of it "in the air", which means the individual sides of the paper might be slightly distorted.
The app CamScanner (iOS and Android) does this very, very well, and I'm wondering how it might be implemented. My first thought was a pipeline like this:
Smoothing / noise reduction
Edge detection (Canny etc.) OR thresholding (global / adaptive)
Hough transform
Detect lines (only vertical / horizontal allowed)
Calculate the intersection points of the four detected lines
But this gives me a lot of problems with different types of images.
I'm also wondering whether there's a better approach that directly detects a rectangle-like shape in an image, and if so, whether CamScanner implements it like that as well.
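For concreteness, the pipeline sketched above translates to something like this with OpenCV's Java bindings (all the thresholds here are placeholders to tune):

```java
import org.opencv.core.Mat;
import org.opencv.core.Size;
import org.opencv.imgproc.Imgproc;

public class HoughPipeline {
    // Steps 1-3 of the list above: smooth, edge-detect, Hough.
    // Returns line segments as rows of (x1, y1, x2, y2).
    static Mat detectLines(Mat gray) {
        Mat blurred = new Mat(), edges = new Mat(), lines = new Mat();
        Imgproc.GaussianBlur(gray, blurred, new Size(5, 5), 0);           // noise reduction
        Imgproc.Canny(blurred, edges, 50, 150);                           // edge detection
        Imgproc.HoughLinesP(edges, lines, 1, Math.PI / 180, 80, 100, 10); // probabilistic Hough
        // Remaining steps: filter the segments by angle (near-vertical or
        // near-horizontal), keep the four strongest, and intersect them
        // pairwise to get corner candidates.
        return lines;
    }
}
```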
Here are some images taken in CamScanner.
These are detected quite nicely, even though in a) one side is distorted (the corner still gets shown in the overlay but doesn't really fit the corner of the white paper) and in b) the background is pretty close to the actual paper, yet it still gets recognized correctly:
It even gets the rotated pictures correctly:
And when I insert some test distortions, it fails, but it at least detects some of the contour, and it always tries to fit it as a rectangle:
And here it fails completely:
I suppose that in the last three examples, if it did a Hough transform, it could have detected at least two of the four sides of the rectangle.
Any ideas and tips?
Thanks a lot in advance
The OpenCV framework may help with your problem. Also, you can look at this document for the Android platform.
The full source code is available on Github.
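As a sketch of what a detector along those lines often looks like with OpenCV's Java bindings (this is not the linked document's actual code; the thresholds and the approximation epsilon are placeholders): blur, edge-detect, then keep the largest convex four-point contour.

```java
import org.opencv.core.*;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;
import java.util.ArrayList;
import java.util.List;

public class PaperDetector {
    // Returns the largest roughly rectangular contour, or null if none found.
    static MatOfPoint findPaper(Mat bgr) {
        Mat gray = new Mat(), edges = new Mat();
        Imgproc.cvtColor(bgr, gray, Imgproc.COLOR_BGR2GRAY);
        Imgproc.GaussianBlur(gray, gray, new Size(5, 5), 0); // noise reduction
        Imgproc.Canny(gray, edges, 50, 150);                 // edge detection
        List<MatOfPoint> contours = new ArrayList<>();
        Imgproc.findContours(edges, contours, new Mat(),
                Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);
        MatOfPoint best = null;
        double bestArea = 0;
        for (MatOfPoint c : contours) {
            MatOfPoint2f c2f = new MatOfPoint2f(c.toArray());
            double eps = 0.02 * Imgproc.arcLength(c2f, true);
            MatOfPoint2f approx = new MatOfPoint2f();
            Imgproc.approxPolyDP(c2f, approx, eps, true);    // simplify the contour
            double area = Math.abs(Imgproc.contourArea(approx));
            // Keep the biggest convex quadrilateral: that's our paper candidate.
            if (approx.total() == 4 && area > bestArea
                    && Imgproc.isContourConvex(new MatOfPoint(approx.toArray()))) {
                best = new MatOfPoint(approx.toArray());
                bestArea = area;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
        Mat img = Imgcodecs.imread("page.jpg"); // hypothetical input image
        MatOfPoint paper = findPaper(img);
        System.out.println(paper == null ? "no quad found" : paper.toList());
    }
}
```

A contour-based search like this tends to degrade more gracefully than strict horizontal/vertical Hough lines on rotated or partially distorted pages, which matches the failure cases shown above.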

I need help drawing sunrays, glimmers, bursts, sparkles, etc. in C

I am in the process of learning how to create a lens flare application. I've got most of the basic components figured out and now I'm moving on to the more complicated ones such as the glimmers / glints / spikeball as seen here: http://wiki.nuaj.net/images/e/e1/OpticalFlaresLensObjects.png
Or these: http://ak3.picdn.net/shutterstock/videos/1996229/preview/stock-footage-blue-flare-rotate.jpg
Some have suggested creating particles that emanate outward from the center while fading out and either increasing or decreasing in size, but I've tried this, and there are just too many nested loops, which makes performance awful.
Someone else suggested drawing a circular gradient from white at the center to black at the radius, then using some algorithm to lighten and darken areas, thus producing rays.
Does anyone have any ideas? I'm really stuck on this one.
I am using a limited compiler that is similar to C but I don't have any access to antialiasing, predefined shapes, etc. Everything has to be hand-coded.
Any help would be greatly appreciated!
I would create large circle selections, then use a radial gradient. Each side of the gradient is white, but one side has 100% alpha and the other 0%. Once you have used the gradient tool to draw that gradient inside the circle, deselect it and use the transform tool to skew, or in a sense smash, it. Then duplicate it several times and rotate each copy, creating a spiral or circle, holding Ctrl to constrain when needed. Once those layers are in the rotation or design that you want, group them in a folder; then you can further affect them all at once with another transform or skew. When you use these really small, they look like little stars. But you can do many different things when creating each one to make them different, like making each one lower in opacity than the last, etc.
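The same kind of ray pattern can also be generated procedurally rather than in an image editor, which fits the "no predefined shapes" constraint. A minimal sketch (Java syntax, but only plain loops and math, so it ports directly to a C-like environment; all the constants are tuning knobs):

```java
// Draw a "spikeball": a radial falloff modulated by an angular ripple.
public class RayBurst {
    static int[] render(int w, int h, int rays) {
        int[] px = new int[w * h];
        double cx = w / 2.0, cy = h / 2.0, maxR = Math.min(cx, cy);
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                double dx = x - cx, dy = y - cy;
                double r = Math.sqrt(dx * dx + dy * dy) / maxR; // 0 at centre, 1 at rim
                double ang = Math.atan2(dy, dx);                // angle of this pixel
                double radial = Math.max(0, 1 - r);             // white centre -> black rim
                // Lighten/darken by angle: cos(rays * ang) gives `rays` bright spokes.
                double spokes = 0.5 + 0.5 * Math.cos(rays * ang);
                // Sharpen the spokes (pow) and keep a soft core (the 0.3 floor).
                double v = radial * radial * (0.3 + 0.7 * Math.pow(spokes, 8));
                int c = (int) (255 * Math.min(1, v));
                px[y * w + x] = (0xFF << 24) | (c << 16) | (c << 8) | c; // grey ARGB
            }
        }
        return px;
    }
}
```

Varying the exponent, the floor, and `rays` per layer, then compositing a few layers at different rotations, gives the hand-made variation described above.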
I found a few examples of how to do lens-flare 'via code'. Ideally you'd want to do this as a post-process - meaning after you're done with your regular render, you process the image further.
Fragment shaders are apt for this step. The easiest version I found is this one. The basic idea is to
Identify really bright spots in your image and potentially downsample it.
Shoot rays from the fragment to the center of the image and sample some pixels along the way.
Accumulate the samples and apply further processing on the result: chromatic distortion, etc.
And you get a whole range of options to play with.
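Since the question rules out shaders, here is the same ray-sampling step as plain CPU loops (Java syntax, but directly portable; `lum` is assumed to be a row-major buffer of thresholded brightness values):

```java
public class RadialBlur {
    // For every pixel, march toward the image centre, sampling along the way;
    // accumulated bright spots smear into rays that point at the centre.
    static float[] radialBlur(float[] lum, int w, int h, int steps) {
        float[] out = new float[w * h];
        float cx = w / 2f, cy = h / 2f;
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                float sum = 0, sx = x, sy = y;
                float dx = (cx - x) / steps, dy = (cy - y) / steps; // one step toward centre
                for (int i = 0; i < steps; i++) {
                    sum += lum[(int) sy * w + (int) sx];
                    sx += dx;
                    sy += dy;
                }
                out[y * w + x] = sum / steps; // average of the samples along the ray
            }
        }
        return out;
    }
}
```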
Another more common alternative seems to be
Have a set of basic images (circles, hexes) and render them as a bunch of bright objects, along the path from the camera to the light(s).
Composite this image on top of the regular render of your scene.
The problem is in determining when to turn the lens flare on, since it depends on whether a light is visible from the camera or occluded. GPU Gems comes to the rescue, with better options.
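The placement itself is just a parametric line in screen space: each flare element sits at centre + t * (light - centre) for some t, so t = 1 is on the light, t = 0 on the screen centre, and negative t mirrors past it. A small sketch (Java syntax; the names are mine):

```java
public class FlarePlacement {
    // Screen positions for the flare elements; each sprite is then drawn at
    // one of these points, typically with a per-element scale and tint.
    static float[][] flarePositions(float lightX, float lightY,
                                    float centerX, float centerY, float[] t) {
        float[][] pos = new float[t.length][2];
        for (int i = 0; i < t.length; i++) {
            pos[i][0] = centerX + (lightX - centerX) * t[i];
            pos[i][1] = centerY + (lightY - centerY) * t[i];
        }
        return pos;
    }
}
```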
A more serious, physically based implementation is described in this paper. It is a real-time approach to lens flares, but you need hardware that supports both vertex and geometry shaders.

Hierarchical animations in DirectX and handling separate animations on the same mesh?

I have a hierarchical animated model in DirectX which loads and animates based on the following DirectX sample: http://msdn.microsoft.com/en-us/library/ee418677%28VS.85%29.aspx
As good as the sample is, it does not really go into some of the details of animation that I'd like. For example, if I have a mesh with a running animation and a throwing animation as separate animation sets, how can I get the throwing animation to apply to the bones above the hip and the running animation to the bones below the hip?
Also, if I wanted the character to lean left or right, would I simply have to find the hip bone and multiply a rotation matrix by its matrix? In this case I think the matrix is m_amxBoneOffsets?
Composing multiple animations to a single one is usually the job of an animation system, something that is way out of scope of the D3D sample.
Let's look at your two examples:
running and throwing
Well, in this case you could apply the animation for the lower part of the body from the running animation and the animation for the upper part of the body from the throwing animation. And you'd get a very crappy result.
The "how" is just a matter of knowing which bones are where in the bone palette (something that depends on how they are stored and in which order, but nothing inherently hard; the definitive reference should be the documentation of the tool generating the animation data).
In practice, you're better off blending the two animations. This is, in general, hard, and software packages exist out there that do it for you, e.g. Gamebryo.
Or, since an animation of a running guy who throws is different enough from that of a standing guy who throws, you might be better off authoring two separate animations.
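To make the per-bone masking idea concrete, here is a platform-agnostic sketch (Java syntax; the pose layout and names are made up): sample both clips, then, for each bone, take the throw transform if the bone belongs to the upper body and the run transform otherwise, with an optional cross-fade weight.

```java
import java.util.Set;

public class AnimationMasking {
    // A pose is one local transform per bone, here a 4x4 matrix as 16 floats.
    // upperBody holds the indices of the hip bone's descendants (spine, arms, head).
    static float[][] blendPoses(float[][] runPose, float[][] throwPose,
                                Set<Integer> upperBody, float throwWeight) {
        float[][] out = new float[runPose.length][];
        for (int bone = 0; bone < runPose.length; bone++) {
            float w = upperBody.contains(bone) ? throwWeight : 0f; // mask per bone
            out[bone] = lerpTransform(runPose[bone], throwPose[bone], w);
        }
        return out;
    }

    // Naive component-wise lerp of the two matrices. Real engines decompose
    // into translation + quaternion and slerp the rotation instead, because
    // lerping rotation matrices distorts the skeleton.
    static float[] lerpTransform(float[] a, float[] b, float t) {
        float[] r = new float[16];
        for (int i = 0; i < 16; i++) r[i] = a[i] + (b[i] - a[i]) * t;
        return r;
    }
}
```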
Leaning
If you apply a rotation matrix to the root bone, you'll simply rotate your whole character.
Now if you rotate the next bone in the hierarchy (from the spine), you'll get all the bones that depend on it to rotate likewise. It will probably do what you want, but there's a sure way to find out. Try it!
Well the thing is the running animation SHOULD affect the throwing animation slightly. What you need to look into is animation blending.
I'm sure Valve wrote a good paper on how they implemented it in Counter-Strike many years ago. It's not on the Valve site, though, so I'm not sure where I got this memory from ...
