I have created a Blender model that uses an armature with bone constraints to animate the model. After exporting the model as a .fbx file and passing it through fbx-conv, my animation is split up into several animations.
Each animation ends up with an ID similar to "MyObject|MyAnimation".
In other words, I need to play all of these sub-animations at once to get my full animation.
I tried several AnimationController methods. First I tried calling AnimationController.setAnimation() for each of the animations, which doesn't work because it cancels the current animation each time it is called.
The AnimationController.animate() method sounds like it is supposed to do what I want, but I just get the same result as with .setAnimation().
Here is the code I tried:
instance = new ModelInstance( myModel );
controller = new AnimationController( instance );
for( Animation animation : instance.animations ) {
    controller.animate( animation.id, 0 );
}
Is this not how .animate() is intended to work?
Also, I am not entirely certain how to correctly use the second argument, transitionTime. Could that be the problem?
As Xoppa pointed out, the LibGDX docs on 3D Animations and Skinning say the following:
"If you want to apply multiple animations to the same ModelInstance, you can use multiple AnimationControllers, as long as they don't interfere with each other (don't affect the same nodes)."
Example:
ModelInstance myInstance = new ModelInstance( myModel );
AnimationController controllerOne = new AnimationController( myInstance );
AnimationController controllerTwo = new AnimationController( myInstance );
controllerOne.setAnimation( "FirstAnimationId", -1 );
controllerTwo.setAnimation( "SecondAnimationId", -1 );
Then in your render loop, you will also need to call .update(delta) on all of your AnimationControllers:
controllerOne.update( Gdx.graphics.getDeltaTime() );
controllerTwo.update( Gdx.graphics.getDeltaTime() );
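Putting it together, a minimal render loop might look like the sketch below (modelBatch, camera, and environment are assumed to already exist from your own setup):
@Override
public void render() {
    float delta = Gdx.graphics.getDeltaTime();
    controllerOne.update(delta);
    controllerTwo.update(delta);

    Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT | GL20.GL_DEPTH_BUFFER_BIT);
    modelBatch.begin(camera);
    modelBatch.render(myInstance, environment); // draws the instance with both animations applied
    modelBatch.end();
}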
I'm trying to get a three.js Mesh from Autodesk Forge objects using the function
'viewer.impl.getRenderProxy(viewer.model, fragId)'.
The problem I encounter is that if I put this function in a loop to get the Meshes of multiple objects, I just get a seemingly random Mesh.
To find out the problem's origin, I used the similar function
'viewer.impl.getFragmentProxy(viewer.model, fragId)'
and it worked just fine.
Here is the routine code that I use, and the result:
for (let i = 0, len = nodNamee.length; i < len; i = i + 3) {
    var instanceTree = viewer.model.getData().instanceTree;
    var fragIds = [];
    // collect the fragment ids belonging to this node
    instanceTree.enumNodeFragments(nodNamee[i + 1], function(fragId) {
        fragIds.push(fragId);
    });
    fragIds.forEach(function(fragId) {
        var renderProxy = viewer.impl.getRenderProxy(viewer.model, fragId);
        fragtoMesh.push(renderProxy);
        //var fragmentproxy = viewer.impl.getFragmentProxy(viewer.model, fragId);
        //fragtoProxy.push(fragmentproxy);
    });
}
Result: (screenshot of the resulting fragtoMesh array)
This is because the getRenderProxy method always returns the same instance of THREE.Mesh, just with different properties. Basically, the method works like this under the hood:
let cachedMesh = new THREE.Mesh();
// ...
getRenderProxy(model, fragId) {
    // Find the geometry, material, and other properties of the fragment
    cachedMesh.geometry = fragGeometry;
    cachedMesh.material = fragMaterial;
    // ...
    return cachedMesh;
}
// ...
Note that this is a performance optimization: if getRenderProxy (which is only meant for internal use) returned a new instance every time it is called by other parts of Forge Viewer, it would cause a huge amount of memory churn.
So in your case, if you really need to store all the THREE.Mesh instances in an array, you'll need to clone them or copy their individual properties into separate THREE.Mesh objects.
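For example, a minimal sketch of that second approach might look like this (the property names are taken from the render proxy; the matrix handling may need adjusting for your scene):
fragIds.forEach(function(fragId) {
    var renderProxy = viewer.impl.getRenderProxy(viewer.model, fragId);
    // copy the proxy's data into a fresh mesh, because the proxy object
    // itself is reused by the viewer on the next call
    var mesh = new THREE.Mesh(renderProxy.geometry, renderProxy.material);
    mesh.matrix.copy(renderProxy.matrixWorld);
    mesh.matrixAutoUpdate = false;
    fragtoMesh.push(mesh);
});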
I'm calling addChild in a custom class (which inherits from Entity) to add an Entity I created with Reality Composer, but the Entity is not placed at the tapped position; it is displayed in the center of the screen.
I'm building on Apple's official collaborative-session sample, which I have working so far. (That sample doesn't use Reality Composer.)
In that sample, tapping places an Entity at the tapped location.
However, when I add an Entity created with Reality Composer to the scene, such as a container added with addChild, it always appears in the middle.
My guess is that this is because Entities created with Reality Composer do not conform to HasModel.
With this code, the Entity is always in the center of the screen
(I've already created a QRScene.rcproject):
final class QRCardEntity: Entity, HasModel {
    let twitterCard = try! QRScene.loadTwitterCard()

    var cardContainer: HasModel {
        twitterCard.allChildren().first { $0.name == "CardContainer" }!.children[0] as! HasModel
    }

    required init() {
        super.init()
        addChild(twitterCard) // preservingWorldTransform: true does not change this.
    }
}
However, this code puts it in the right place.
final class QRCardEntity: Entity, HasModel {
    let twitterCard = try! QRScene.loadTwitterCard()

    var cardContainer: HasModel {
        twitterCard.allChildren().first { $0.name == "CardContainer" }!.children[0] as! HasModel
    }

    required init() {
        super.init()
        model = cardContainer.model
    }
}
Extensions used:
private extension Entity {
    func allChildren() -> [Entity] {
        children.reduce([]) { $0 + [$1] + $1.allChildren() }
    }
}
I don't think this is the best way.
addChild would be the better approach, because it adds the whole Entity while preserving its hierarchy.
Assigning the model like this only copies the model from the top of the hierarchy, so the other models needed for display are not added.
How do I get an Entity created by Reality Composer to appear in the correct position when using addChild?
P.S. 2020/07/03
To be precise, when I tap on Device 1 the Entity appears in the center, and Device 2 then also shows the Entity centered (asynchronously), no matter where its camera is pointed.
If I use an Entity created in code like this, instead of the one from Reality Composer, it works:
addChild(
    ModelEntity(
        mesh: .generateBox(size: 0.1),
        materials: [
            SimpleMaterial(color: color ?? .white, isMetallic: false)
        ]
    )
)
If the position is already zero, you could try something like this, which may get it looking right, though the positions of your Entities may end up a bit off:
twitterCard.position = -twitterCard.visualBounds(relativeTo: nil).center
That may need a little tweaking, but hopefully the intention is clear.
I've had issues understanding the hierarchy that Reality Composer gives to Entities, which is why I have avoided using it.
I solved it myself.
The cause was that I was adding the whole scene as a child.
This code loads the scene created by Reality Composer, and TwitterCard is the scene name:
let twitterCard = try! QRScene.loadTwitterCard()
The scene name is the one shown here (screenshot), while the ModelEntity's actual name is shown here (screenshot); that is the one I needed to use.
This loads it correctly:
addChild(twitterCard.cardObject!)
With just this line, the child Entity created by Reality Composer can be added.
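For reference, a minimal sketch of the fixed initializer based on the code above (QRScene, loadTwitterCard and cardObject all come from the generated .rcproject code):
required init() {
    super.init()
    let twitterCard = try! QRScene.loadTwitterCard()
    addChild(twitterCard.cardObject!) // add the model entity, not the whole scene
}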
I still have one problem: I can't post the notification for the motion created in Reality Composer, because twitterCard itself is not added to the ARView's scene. If this can't be done, I will have to write a lot of animation code by hand. I'll create another post for that when needed, thanks.
I would like to modify a Timeline while it is running; for example, in response to a mouse click I want to change the target value of the animation. I have tried several methods to do this, including (in a mouse-clicked handler):
1. Pausing the animation by calling pause() on the Timeline, clearing the KeyFrames ObservableList, adding a new KeyFrame, and calling play() on the Timeline.
2. Creating a new KeyFrame with a cue name, adding the new frame to the ObservableList, and calling jumpTo(cueName) on the Timeline.
Some example code is:
String cueName = String.valueOf(System.currentTimeMillis());
KeyValue kv = new KeyValue(myObject.rotateProperty(), -90 + 180.0 * Math.random(), new CustomInterpolator());
KeyFrame kf = new KeyFrame(Duration.seconds(10), cueName, kv);
startupTimeline.getKeyFrames().add(kf);
startupTimeline.jumpTo(cueName);
startupTimeline.play();
Neither of these appears to work; the animation just stops.
Should I be able to modify the KeyFrame list of an existing Timeline, or do I need to create a new Timeline if I want to change an animation while it is executing?
To the best of my knowledge, a Timeline can't be changed in that manner once it has started playing. The issue is that you might change the total cycle duration, which would confuse all the interpolation computations.
You probably need an AnimationTimer for this. AnimationTimer has an abstract handle(long timestamp) method which takes a timestamp (in nanoseconds) and is invoked every time the scene graph is rendered, so you can do something like:
AnimationTimer animation = new AnimationTimer() {
    private long startTime = -1;

    @Override
    public void handle(long timestamp) {
        if (startTime == -1) {
            startTime = timestamp;
        }
        long totalElapsedNanoseconds = timestamp - startTime;
        // update UI based on elapsed time...
    }
};
The handle() method is invoked on the JavaFX Application Thread, so it is safe to update the UI, and to reference any variables that are only changed on the same thread.
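For example, a sketch along these lines would let you change the target of the animation from a mouse-click handler at any time (myObject is the node from your question; the rotation speed and the field names are placeholders):
// fields in your controller class
private double targetAngle = 90;

private final AnimationTimer animation = new AnimationTimer() {
    private long lastTimestamp = -1;

    @Override
    public void handle(long timestamp) {
        if (lastTimestamp > 0) {
            double elapsedSeconds = (timestamp - lastTimestamp) / 1_000_000_000.0;
            double current = myObject.getRotate();
            double maxStep = 45 * elapsedSeconds; // rotate at 45 degrees per second
            double remaining = targetAngle - current;
            myObject.setRotate(current + Math.signum(remaining) * Math.min(maxStep, Math.abs(remaining)));
        }
        lastTimestamp = timestamp;
    }
};

// start it once, then change the target from the click handler:
// animation.start();
// scene.setOnMouseClicked(e -> targetAngle = -90 + 180 * Math.random());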
When using a TextureAtlas to create a Sprite with the createSprite method, the LibGDX documentation says: "This method uses string comparison to find the region and constructs a new sprite, so the result should be cached rather than calling this method multiple times."
How do I cache these results? Is it just a variable I create to store the created sprite? If so then how do I create different copies of the same sprite?
Each time you use the createSprite method, a new Sprite gets created. Usually you'd have one sprite per enemy for example. Let's say that you have a class Frog which is one of your enemies. It should look like this (pseudo-code):
import com.badlogic.gdx.graphics.g2d.Batch;
import com.badlogic.gdx.graphics.g2d.Sprite;
import com.badlogic.gdx.graphics.g2d.TextureAtlas;

public class Frog {
    private Sprite sprite;

    public Frog(TextureAtlas atlas) {
        sprite = atlas.createSprite("frog"); // string lookup happens once per Frog
    }

    public void update(float deltaTime) {
        // update the sprite position, e.g. sprite.setPosition(x, y)
    }

    public void render(Batch batch) {
        sprite.draw(batch);
    }
}
Now each Frog has its own Sprite. This is necessary, since all frogs can be in different places, and the position is configured via the Sprite. You create the sprite just once in the constructor, and all of those sprites share the same TextureRegion of the same TextureAtlas. That results in good performance, since there won't be many texture switches on the graphics card when you render your frogs.
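If you prefer to do the caching yourself, a minimal sketch could cache the region lookup once and create independent Sprite copies from it (the atlas path and region name are placeholders; if your regions were packed with whitespace stripping, createSprite handles the offsets for you):
TextureAtlas atlas = new TextureAtlas(Gdx.files.internal("game.atlas"));
TextureAtlas.AtlasRegion frogRegion = atlas.findRegion("frog"); // string lookup happens only once

Sprite frogA = new Sprite(frogRegion); // independent copies...
Sprite frogB = new Sprite(frogRegion); // ...sharing the same texture region
frogA.setPosition(10, 20);
frogB.setPosition(50, 80);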
I am new to game development but familiar with programming languages. I have started using Flixel and have a working Breakout game with score and lives.
What I am trying to do is add a Start Screen before actually loading the game.
I have a create function that adds all the game elements to the stage:
override public function create():void
{
    // all game elements
}
How can I add this pre-load Start Screen? I'm not sure if I have to add the code to this create function or somewhere else, and what code to actually add.
Eventually I would also like to add saving, loading, options and upgrades too. So any advice with that would be great.
Here is my main game.as:
package
{
    import org.flixel.*;

    public class Game extends FlxGame
    {
        private const resolution:FlxPoint = new FlxPoint(640, 480);
        private const zoom:uint = 2;
        private const fps:uint = 60;

        public function Game()
        {
            super(resolution.x / zoom, resolution.y / zoom, PlayState, zoom);
            FlxG.flashFramerate = fps;
        }
    }
}
Thanks.
The way that I usually do it is with a different FlxState: I use one for the menu, one for the game itself, and one for the Game Over screen.
So make a new class that extends FlxState, call it something like "MenuState", and then in your Game constructor say:
super(resolution.x / zoom, resolution.y / zoom, MenuState, zoom);
Inside MenuState, on a button press or something, say:
FlxG.switchState(PlayState);
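A minimal sketch of such a MenuState might look like this (the text, the key choice, and whether switchState takes the state class or a new instance depend on your Flixel version):
package
{
    import org.flixel.*;

    public class MenuState extends FlxState
    {
        override public function create():void
        {
            // simple title text; replace with buttons or graphics as needed
            add(new FlxText(0, FlxG.height / 2 - 10, FlxG.width, "BREAKOUT\nPress ENTER to start"));
        }

        override public function update():void
        {
            super.update();
            if (FlxG.keys.justPressed("ENTER"))
            {
                FlxG.switchState(PlayState); // or FlxG.switchState(new PlayState()) in newer Flixel versions
            }
        }
    }
}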