Honestly, I've browsed everywhere I could but, as you might have guessed, failed to find a solution to my issue.
So I need a very simple shape to show a moving object. By design, it is just a moving arrow. Simple, right?
The catch is that I also need to indicate its movement speed. By design, this is done with trailing arrows: ">" - standing, ">>" - moving slowly, ">>>" - moving at normal speed, ">>>>" - moving fast.
Here's my current sketch (it should have 3 sub-arrows in the end).
So the issue is that I want to make all these states separate animations, so I can export everything as a single model and just switch animations as the state changes.
But I can't figure out how to toggle their visibility.
It seems like there is no way to change the state of a child object if it is not an armature. I found one trick with vertex groups and masks, but it doesn't work (or maybe I can't figure it out) for multiple levels like in my case, where each animation needs to show a different set of sub-arrows.
So that's basically the question. How can I: a) animate a structure like:
arrow
|- speed1
|- speed2
|- speed3
(they are all planes);
b) control the visibility of more than 2 vertex groups;
c) or, if this is just the wrong approach, what is the proper way to do this?
Thanks for your answers!
There's not really a good way to change the topology of a single mesh in an animation; the usual way this is done is to take the parts that are meant to be hidden and scale them down and/or move them inside other parts of the mesh. If you can specify keyframes at a frame rate at least as high as the one your animations will be played back at, you can make the appearance/disappearance of those pieces of the mesh "instantaneous" by placing the keyframes on adjacent frames.
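As a minimal sketch of that "adjacent keyframes" trick: if the exported model ends up being played back in something like three.js (an assumption on my part; the same idea applies in any engine), a scale track with two keyframes one frame apart makes a sub-arrow pop in effectively instantly:

    // `arrow` is the root object and `speed3` a child sub-arrow; names are illustrative
    var fps = 24;
    var track = new THREE.VectorKeyframeTrack(
        'speed3.scale',          // target the third sub-arrow's scale
        [0, 1 / fps],            // two keyframes on adjacent frames
        [0, 0, 0,  1, 1, 1]      // hidden (scale 0), then full size
    );
    var clip = new THREE.AnimationClip('speed-normal', -1, [track]);
    var mixer = new THREE.AnimationMixer(arrow);
    mixer.clipAction(clip).play();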
I'm working on a 3rd person game built on Three.js where the user has free orbital control of the camera.
To enhance the player experience, I'd like to use an outline on the active character that the user is controlling (there can be more characters in the scene), but only show the outline when (a part of) the player model is occluded/behind something.
Example:
The effect I want to create is shown on the far right (Stencil: 1), but instead of an opaque effect, I just want the outline, as shown on the other models. There should be no outline on the visible parts of the player character, only where the character is partially or completely behind another object.
Now, the outline effect itself isn't really a problem; there are more than enough resources/examples/tutorials about that. The part where I'm stuck is how to combine something like this with the occlusion test. There is also the matter of performance: if at all possible, I would really like to avoid rendering my entire scene multiple times.
Thanks in advance!
Using threejs R88 / 89-dev.
I build a complex BufferGeometry from scratch and I am looking for a way to selectively hide or show sections of it. (I don't want to have to save the positions elsewhere, so moving vertices around is not an option.)
After reading the code a bit, I wondered if I could use BufferGeometry.groups, except that I don't want/need a different material per group. Maybe assigning a different material could work, and I could set the opacity to 0 for the parts I don't want? However, there will be many thousands of groups, so I doubt that would be effective.
Started a JSFiddle with a boiled down example: http://jsfiddle.net/callum/x7y72k1e/
Is there a way to do it efficiently?
(As I'm typing this, I wonder if I should just store the positions in a mirror attribute array and use that.)
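For what it's worth, a minimal sketch of the groups idea described above, assuming the sections to toggle are contiguous ranges of the buffer (the ranges and colours here are made up):

    // one material per visibility state; "hidden" ranges get a fully transparent material
    var visibleMat = new THREE.MeshBasicMaterial({ color: 0x00ff00 });
    var hiddenMat  = new THREE.MeshBasicMaterial({ transparent: true, opacity: 0, depthWrite: false });

    geometry.clearGroups();
    geometry.addGroup(0, 3000, 0);      // first 3000 indices drawn with material 0 (visible)
    geometry.addGroup(3000, 1500, 1);   // next 1500 drawn with material 1 (hidden)

    var mesh = new THREE.Mesh(geometry, [visibleMat, hiddenMat]);

Each group is still its own draw call, which is why thousands of groups would hurt; if there are only two states and the hidden ranges can be kept at the end of the buffer, geometry.setDrawRange(start, count) avoids the extra materials entirely.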
Hi!
I am working with objects that have huge numbers of vertices. I am able to show lots of models because I have split them into smaller parts (under 65K vertices each), and I am using three.js cameras. I want to increase performance by using a priority queue: while the user is moving the camera, show only the top 10 models, and when the movement stops, show the rest. That part is not that hard, but I don't want to render models when they are behind another object. Maybe I could send out some rays from the camera's point of view (checking for bounding-box hits) and build the priority queue according to the hit list.
What do you think?
Also, how can I detect whether I can load the next model or not (on the fly)?
Option A: Occlusion culling; you will need to find a library for this.
Option B: Use an AABB plane test with the camera frustum planes and the object's bounding box. This tells you whether an object is in the camera's field of view (not whether it is actually visible behind another object, as such an operation is impossible; this is most likely already done to a degree by WebGL).
Implementation:
Google it, three js probably supports this
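A minimal sketch of that frustum test (note: setFromMatrix was renamed setFromProjectionMatrix in newer three.js releases):

    var frustum = new THREE.Frustum();
    var projScreenMatrix = new THREE.Matrix4();
    projScreenMatrix.multiplyMatrices(camera.projectionMatrix, camera.matrixWorldInverse);
    frustum.setFromMatrix(projScreenMatrix);

    var box = new THREE.Box3().setFromObject(model);   // world-space bounding box
    var inView = frustum.intersectsBox(box);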
Option C: Use a maximum object render limit, prioritized by distance from the camera and size of the object. E.g. calculate which objects are visible (Option B), then prioritize the closest and biggest ones and disable the rest.
pseudo-code:
if (frustum.intersectsBox(bounding)) {
    // bigger and closer objects get a higher priority
    var priority = bounding.getSize(new THREE.Vector3()).length() / distanceToCamera;
}
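Putting B and C together, a rough sketch, assuming objects is the array of model parts (top-level, so obj.position is effectively a world position) and frustum is set up as shown under Option B:

    var inView = objects.filter(function (obj) {
        return frustum.intersectsBox(new THREE.Box3().setFromObject(obj));
    });
    inView.forEach(function (obj) {
        var box = new THREE.Box3().setFromObject(obj);
        obj.userData.priority = box.getSize(new THREE.Vector3()).length() /
                                camera.position.distanceTo(obj.position);
    });
    inView.sort(function (a, b) { return b.userData.priority - a.userData.priority; });

    // while the camera is moving, keep only the 10 highest-priority objects visible
    objects.forEach(function (obj) { obj.visible = false; });
    inView.slice(0, 10).forEach(function (obj) { obj.visible = true; });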
Make sure your shaders are only doing one pass; a second pass will roughly double the calculation time, depending on the situation.
Option D: Raycast to the eight corners of the bounding box; if all of them fail, don't render the object. This is pretty accurate, but by no means perfect.
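A sketch of that corner test (illustrative names; occluders would be the set of objects that can hide model):

    var box = new THREE.Box3().setFromObject(model);
    var corners = [];
    [box.min.x, box.max.x].forEach(function (x) {
        [box.min.y, box.max.y].forEach(function (y) {
            [box.min.z, box.max.z].forEach(function (z) {
                corners.push(new THREE.Vector3(x, y, z));
            });
        });
    });

    var raycaster = new THREE.Raycaster();
    var anyCornerVisible = corners.some(function (corner) {
        var dir = corner.clone().sub(camera.position).normalize();
        raycaster.set(camera.position, dir);
        var hits = raycaster.intersectObjects(occluders, true);
        // a corner counts as visible if nothing sits between the camera and it
        return hits.length === 0 || hits[0].distance >= camera.position.distanceTo(corner);
    });
    model.visible = anyCornerVisible;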
Option A will be the best for sure. Option C is great if you don't care that small, far-away objects don't get rendered. Option D works well with objects that have a lot of verts; you may want to raycast more points of the object depending on the situation. Option B probably won't be useful for your scenario on its own, but it's a part of C and of other optimization methods. Overall, there has never been an extremely reliable and optimal way to tell if something is behind something else.
I am in the process of learning how to create a lens flare application. I've got most of the basic components figured out and now I'm moving on to the more complicated ones such as the glimmers / glints / spikeball as seen here: http://wiki.nuaj.net/images/e/e1/OpticalFlaresLensObjects.png
Or these: http://ak3.picdn.net/shutterstock/videos/1996229/preview/stock-footage-blue-flare-rotate.jpg
Some have suggested creating particles that emanate outwards from the center while fading out and either increasing or decreasing in size, but I've tried this and there are just too many nested loops, which makes performance awful.
Someone else suggested drawing a circular gradient from white at the center to black at the radius and using some algorithms to lighten and darken areas, thus producing rays.
Does anyone have any ideas? I'm really stuck on this one.
I am using a limited compiler that is similar to C but I don't have any access to antialiasing, predefined shapes, etc. Everything has to be hand-coded.
Any help would be greatly appreciated!
I would create large circle selections, then use a radial gradient. Each side of the gradient is white, but one side has 100% alpha and the other 0%. Once you have used the gradient tool to draw that gradient inside the circle, deselect it and use the transform tool to skew or, in a sense, smash it. Then duplicate it several times and rotate each copy, creating a spiral or circle, holding Ctrl to constrain when needed. Once those layers are in the rotation or design that you want, group them in a folder; you can then affect them all at once with another transform or skew. When you use these really small, they look like little stars. You can also vary how you create each one to make them different, like making each one lower in opacity than the last, etc.
I found a few examples of how to do lens flares 'via code'. Ideally you'd want to do this as a post-process, meaning that after you're done with your regular render, you process the image further.
Fragment shaders are apt for this step. The easiest version I found is this one. The basic idea is to:
Identify really bright spots in your image and potentially downsample it.
Shoot rays from the fragment to the center of the image and sample some pixels along the way.
Accumulate the samples and apply further processing (chromatic distortion, etc.) to them.
And you get a whole range of options to play with.
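A rough per-pixel sketch of steps 2 and 3, written as plain JS over an image buffer just to show the loop (sampleBright(u, v) is an assumed helper that reads the thresholded/downsampled brightness at texture coordinates in [0, 1]; the constants are illustrative):

    var SAMPLES = 8;       // taps taken along the ray
    var DISPERSAL = 0.3;   // how far toward the centre the ray reaches

    function flareAt(u, v) {
        // direction from this pixel toward the image centre, split into SAMPLES steps
        var stepU = (0.5 - u) * DISPERSAL / SAMPLES;
        var stepV = (0.5 - v) * DISPERSAL / SAMPLES;
        var sum = 0;
        for (var i = 0; i < SAMPLES; i++) {
            sum += sampleBright(u + stepU * i, v + stepV * i);   // accumulate along the ray
        }
        return sum / SAMPLES;   // chromatic distortion etc. would go on top of this
    }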
Another more common alternative seems to be
Have a set of basic images (circles, hexes) and render them as a bunch of bright objects, along the path from the camera to the light(s).
Composite this image on top of the regular render of your scene.
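For the placement itself, a tiny illustrative sketch (projectToScreen is an assumed helper, not from any particular library): the flare elements sit on the line from the light's projected screen position through the screen centre and out the other side.

    var lightPos = projectToScreen(light);      // assumed helper, returns {x, y} in [0, 1]
    var centre = { x: 0.5, y: 0.5 };
    flareSprites.forEach(function (sprite, i) {
        var t = i / (flareSprites.length - 1);  // 0 at the light, 1 mirrored past the centre
        sprite.x = lightPos.x + (centre.x - lightPos.x) * 2 * t;
        sprite.y = lightPos.y + (centre.y - lightPos.y) * 2 * t;
    });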
The problem is in determining when to turn the lens flare on, since that depends on whether a light is visible or occluded from the camera. GPU Gems comes to the rescue with better options.
A more serious, physically based implementation is described in this paper. It is a real-time way of making lens flares, but you need hardware that supports both vertex and geometry shaders.
I have a hierarchical animated model in DirectX which loads and animates based on the following DirectX sample: http://msdn.microsoft.com/en-us/library/ee418677%28VS.85%29.aspx
As good as the sample is, it does not really go into some of the details of animation that I'd like. For example, if I have a mesh which has a running animation and a throwing animation as separate animation sets, how can I get the throwing animation to apply to the bones above the hip and the walking animation to the bones below the hip?
Also, if I wanted to, for example, have the person lean left or right, would I simply have to find the bone for the hip and multiply a rotation matrix by its matrix? In this case I think the matrix is m_amxBoneOffsets?
Composing multiple animations into a single one is usually the job of an animation system, something that is way out of the scope of the D3D sample.
Let's look at your 2 examples:
running and throwing
Well, in this case you could apply the animation for the lower part of the body from the running animation and the animation for the upper part of the body from the throwing animation. And you'd get a very crappy result.
The how is just a matter of knowing which bones are where in the bone palette (something that depends on how they are stored and in which order, but nothing inherently hard; the definitive reference should be the documentation of the tool generating the animation data).
In practice, you're better off with a blending of the 2 animations. This is, in general, hard, and software packages exist out there that do it for you (Gamebryo, e.g.).
Or, an animation of a running guy who throws is different enough from a standing guy who throws that you might be better off having 2 animations.
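In code terms, the split-at-the-hip version is just a per-bone selection when the final pose is built. A language-agnostic sketch (plain JS here; names are illustrative, not the D3D sample's API):

    // runPose / throwPose: maps of bone name -> local transform for the current time
    // upperBodyBones: array of bone names at or above the hip/spine split
    function composePose(runPose, throwPose, upperBodyBones) {
        var pose = {};
        for (var bone in runPose) {
            pose[bone] = upperBodyBones.indexOf(bone) !== -1 ? throwPose[bone] : runPose[bone];
        }
        return pose;   // a real blend would lerp/slerp the two transforms per bone instead of picking one
    }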
Leaning
If you apply a rotation matrix to the root bone, you'll simply rotate your whole character.
Now if you rotate the next bone in the hierarchy (from the spine), you'll get all the bones that depend on it to rotate likewise. It will probably do what you want, but there's a sure way to find out. Try it!
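As a sketch of the leaning idea (placeholder names, not the sample's matrices; the multiplication order depends on your matrix convention):

    // each frame, fold a small extra roll into the spine bone's local transform
    // before the bone hierarchy is walked to build the world matrices
    var leanMatrix = makeRotationZ(leanAngle);                       // assumed helper
    spineBone.localTransform = multiply(leanMatrix, spineBone.localTransform);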
Well, the thing is, the running animation SHOULD affect the throwing animation slightly. What you need to look into is animation blending.
I'm sure Valve wrote a good paper on how they implemented it in Counter-Strike many years ago. It's not on the Valve site though, so I'm not sure where I got this memory from...