Why can't I find the Scene Time node in Geometry Nodes in Blender?

I am working in Blender and trying to follow some YouTube video tutorials. My progress comes to a halt when I am unable to find one particular node that is supposed to be in Geometry Nodes: "Scene Time". I have gotten pretty good at slowing down and pausing the tutorial so I can see what is being clicked and typed, but I have been unsuccessful in locating the elusive "Scene Time" node. If anyone can explain why I am unable to locate this node and/or guide me towards finding it, it would be greatly appreciated. Thanks.

It's under Input -> Scene Time. The node was added in Blender 3.1 (https://wiki.blender.org/wiki/Reference/Release_Notes/3.1/Nodes_Physics): "It's no longer necessary to add a #frame driver to get the scene's current animation time, the Scene Time node does that instead". You probably have an older version of Blender.

This node was added in version 3.1. Update your Blender to 3.1 or higher and you will probably find it.

Related

Heavy lag on mouse movement with SketchUp DAE model

I've designed a 3D model in SketchUp without using any textures. I'm facing an issue with lag when moving and rotating the mouse. When I exported the model in DAE format and imported it into the three.js online editor, mouse movement became very slow; I think the FPS drops. I can't figure out what the problem with my model is. I need your suggestions and ideas on how to resolve this issue. Thanks for your support. I've uploaded an image of the 3D model; please take a look.
Object Count: 98,349; Vertices: 2,107,656; Triangles: 702,552
Object Count: 98,349
The object count results in an equal number of draw calls. Such a high value will degrade performance no matter how complex the individual geometries are.
I suggest you redesign the model and merge individual objects as much as possible. Also try to lower the number of vertices and faces.
Keep in mind that three.js does not automatically merge or batch render items, so it's your responsibility to optimize assets for rendering. It's best to do this when designing the model, or in code via methods like BufferGeometryUtils.mergeBufferGeometries() or via instanced rendering.
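As a dependency-free sketch (plain JavaScript, not the actual three.js API) of why merging helps: each separate object normally costs one draw call, while one merged buffer can be drawn in a single call.

```javascript
// Toy illustration (not three.js): concatenate per-object vertex data
// into one buffer so the renderer can draw everything in a single call.
function mergeGeometries(geometries) {
  const positions = [];
  for (const g of geometries) {
    positions.push(...g.positions); // append this object's vertex data
  }
  return { positions };
}

// Three one-triangle "objects" (9 floats each) => normally 3 draw calls.
const objects = Array.from({ length: 3 }, (_, i) => ({
  positions: [i, 0, 0, i + 1, 0, 0, i, 1, 0],
}));
const merged = mergeGeometries(objects);
console.log(merged.positions.length); // 27 floats, drawable in one call
```

Note that naive merging loses per-object transforms and materials; in a real scene you would bake transforms into the vertex data first, or use instanced rendering for repeated geometry.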

Best data structure for point cloud updates?

I'm working on a robot using the new Jetson Nano. I've got points generated from my camera's depth image and am working towards building up a scene as the robot moves around. My issue is that simply throwing points into the data structure every frame would make me run out of memory very quickly, so I want some heuristic that says: if a point meets some condition, don't add it.
For this I imagine I need an acceleration structure like an octree, k-d tree, BVH, or maybe something else. While I am familiar with them and can find lots of info on how to build them, I'm a little confused about which of them would be easiest to update each frame, and whether some require complete rebuilds rather than incremental updates. Could some be parallelized? Any insight on what type of data structure to use, ideally with a link about it, would be super helpful.
Edit:
I believe the best structure for this is likely a sparse voxel octree. You can find some general ideas on how to build one in this blog post from Nvidia: https://devblogs.nvidia.com/thinking-parallel-part-iii-tree-construction-gpu/
If a Morton code maps to a specific voxel, that voxel is 'filled'. Redundant points are automatically taken care of, since a voxel is either filled or unfilled. For removal, I think I can ray trace through the octree and, if I collide with a filled voxel before I expect to, delete the existing voxel. There are some resolution problems, but I think I can handle those with a hybrid approach.
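A minimal sketch of the filled-voxel idea in plain JavaScript, with assumed names (`voxelSize`, `addPoint`): quantize each incoming point to a voxel, pack the voxel coordinates into a 30-bit Morton code, and keep filled codes in a Set so redundant points are rejected for free.

```javascript
// Spread the low 10 bits of n so there are two zero bits between each bit.
function part1By2(n) {
  n &= 0x3ff; // 10 bits per axis => 30-bit Morton code
  n = (n | (n << 16)) & 0xff0000ff;
  n = (n | (n << 8)) & 0x0300f00f;
  n = (n | (n << 4)) & 0x030c30c3;
  n = (n | (n << 2)) & 0x09249249;
  return n;
}

// Interleave x, y, z bits into a single Morton code.
function mortonCode(x, y, z) {
  return part1By2(x) | (part1By2(y) << 1) | (part1By2(z) << 2);
}

const voxelSize = 0.05; // 5 cm voxels (assumed resolution)
const filled = new Set();

// The insertion heuristic: skip any point whose voxel is already filled.
function addPoint(px, py, pz) {
  const code = mortonCode(
    Math.floor(px / voxelSize),
    Math.floor(py / voxelSize),
    Math.floor(pz / voxelSize)
  );
  if (filled.has(code)) return false; // redundant point, don't store it
  filled.add(code);
  return true;
}
```

With a flat Set like this there is no tree to rebuild at all; a full sparse voxel octree adds the hierarchy on top of the same Morton ordering, which is what makes GPU-parallel construction practical.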

Efficiently rendering tiled map using SpriteKit

As an exercise, I decided to write a SimCity (original) clone in Swift for OSX. I started the project using SpriteKit, originally having each tile as an instance of SKSpriteNode and swapping the texture of each node when that tile changed. This caused terrible performance, so I switched the drawing over to regular Cocoa windows, implementing drawRect to draw NSImages at the correct tile position. This solution worked well until I needed to implement animated tiles which refresh very quickly.
From here, I went back to the first approach, this time using a texture atlas to reduce the amount of draws needed, however, swapping textures of nodes that need to be animated was still very slow and had a huge detrimental effect on frame rate.
I'm attempting to display a 44x44 tile map where each tile is 16x16 pixels. I know there must be a more efficient (or perhaps more correct) way to do this, which leads to my question:
Is there an efficient way to support 1500+ nodes in SpriteKit and which are animated through changing their textures? More importantly, am I taking the wrong approach by using SpriteKit and SKSpriteNode for each tile in the map (even if I only redraw the dirty ones)? Would another approach (perhaps, OpenGL?) be better?
Any help would be greatly appreciated. I'd be happy to provide code samples, but I'm not sure how relevant/helpful they would be for this question.
Edit
Here are some links to relevant drawing code and images to demonstrate the issue:
Screenshot:
When the player clicks on the small map, the center position of the large map changes. An event is fired from the small map to the central engine powering the game, which is then forwarded to listeners. The code that gets executed on the large map to change all of the textures can be found here:
https://github.com/chrisbenincasa/Swiftopolis/blob/drawing-performance/Swiftopolis/GameScene.swift#L489
That code uses tileImages which is a wrapper around a Texture Atlas that is generated at runtime.
https://github.com/chrisbenincasa/Swiftopolis/blob/drawing-performance/Swiftopolis/TileImages.swift
Please excuse the messiness of the code -- I made an alternate branch for this investigation and haven't cleaned up a lot of residual code that has been hanging around from previous iterations.
I don't know if this will "answer" your question, but it may help.
SpriteKit will likely be able to handle what you need, but you should look at different optimizations, both in SpriteKit itself and, even more so, in your game logic.
SpriteKit. Creating a .atlas is by far one of the best things you can do and will help keep your draw calls down. Also, as I learned the hard way, keep a reference to your SKTextures for as long as you need them, and only generate the ones you need. For instance, don't call textureWithImageNamed:@"myImage" every time you need a texture for myImage; instead, reuse one texture and store it in a dictionary. Also, skView.ignoresSiblingOrder = YES; helps a bunch, but then you have to manage zPosition yourself on all the sprites.
Game logic. Updating every tile every loop is going to be very expensive. You will want to look at a better way to do that, such as keeping smaller arrays or doing logic (model) updates on a background thread.
I currently have a project you can look into if you want called Old Frank. I have a map that is 75 x 75 with 32px by 32px tiles that may be stacked 2 tall. I have both Mac and iOS target so you could in theory blow up the scene size and see how the performance holds up. Not saying there isn't optimization work to be done (it is a work in progress), but I feel it might help get you pointed in the right direction at least.
Hope that helps.
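The texture-reuse advice above ("store it in a dictionary") is just memoization; here is a framework-free sketch in plain JavaScript, where the stub loader stands in for the expensive texture creation:

```javascript
// Cache textures by name: load each one once, then keep handing back
// the same instance instead of re-creating it per use.
class TextureCache {
  constructor(loadFn) {
    this.loadFn = loadFn; // the expensive load (e.g. texture creation)
    this.cache = new Map();
    this.loads = 0;       // counts actual loads, for demonstration
  }
  get(name) {
    if (!this.cache.has(name)) {
      this.loads++;
      this.cache.set(name, this.loadFn(name));
    }
    return this.cache.get(name);
  }
}

const cache = new TextureCache((name) => ({ name })); // stub loader
cache.get("grass");
cache.get("grass"); // served from the dictionary, no second load
cache.get("water");
console.log(cache.loads); // 2 loads for 3 requests
```

The same pattern applies directly to SKTexture: key the dictionary by image name and hand every sprite the shared instance.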

Renderer running two programs and high vertices/faces

I'm developing a little game in three.js and have stumbled upon a problem I can't seem to understand.
I'm creating a game instance; after that, I load my enemies, which are added to an array of enemies. Before I can do that, I load my 3D objects with loader.load('assets/data/', callback) and then create the enemies.
However, there are two problems here. As soon as the callback is called, my renderer shows 2 programs running, and the second problem is that I'm getting a huge number of vertices and faces for only 10 enemy meshes!
Do I need to go back to Blender and somehow reduce the vertex/face count, or am I doing something wrong?
Just hint if you need more info!
Regards.

SceneKit - Occlusion culling

I've played with SceneKit on iOS 8 for quite a while, and recently I ran into a situation in which I need to detect whether a node appears in the viewport. Occlusion culling might be a possible solution. Is there any occlusion culling option available in SceneKit, and if not, what are other approaches I might want to try? Thanks!
The isNodeInsideFrustum:withPointOfView: method tells you if a node is inside the camera's field of view, but it won't tell you whether it's occluded by other scene geometry.
If you need occlusion testing, a frustum test is a good place to start. Once you know a node is in the view frustum, you can do hit tests to see if there are any nodes in between. If the results of a hit test include nodes other than your target, it may be at least partially obscured.
Hit testing won't get you extreme detail (like whether any rendered pixels of one node would be visible behind those of other nodes), but it might be enough for what you need. You can refine the sensitivity of hit testing a bit with the options parameter and by choosing which points to test — e.g. just the center of a target node or the corners of its bounding box. Hit testing has a CPU performance cost, too, so you'll have to find the right tradeoff between the functionality you want and your target frame rate.
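The hit-test reasoning above reduces to a tiny check (plain JavaScript sketch with assumed names, not the SceneKit API; in SceneKit the hits would come from a hit test along the camera-to-node ray, ordered nearest-first):

```javascript
// Given hit-test results ordered nearest-first along the ray from the
// camera toward the target node, decide whether the target is likely
// (at least partially) occluded at that sample point.
function isLikelyOccluded(hits, targetId) {
  if (hits.length === 0) return true;  // ray never reached the target
  return hits[0].nodeId !== targetId;  // something else was hit first
}

console.log(isLikelyOccluded([{ nodeId: "wall" }, { nodeId: "hero" }], "hero")); // true
console.log(isLikelyOccluded([{ nodeId: "hero" }, { nodeId: "wall" }], "hero")); // false
```

A single ray only samples one point, which is why testing several points (center plus bounding-box corners) gives a more robust answer at a higher CPU cost.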
SCNView, via the SCNSceneRenderer protocol, implements
isNodeInsideFrustum:withPointOfView:
which lets you test whether a node is visible from a given camera.
https://developer.apple.com/library/mac/documentation/SceneKit/Reference/SCNSceneRenderer_Protocol/Reference/SCNSceneRenderer.html#//apple_ref/occ/intfm/SCNSceneRenderer/isNodeInsideFrustum:withPointOfView:
