I started using this A-Frame example to experiment with graphs in VR:
https://github.com/vasturiano/3d-force-graph-vr
I needed to have more control over the scene so I moved to the lower level component:
https://github.com/vasturiano/aframe-forcegraph-component
Everything works nicely, but I am now trying to bridge the gap between the lower-level three.js objects attached to the forcegraph A-Frame entity and the VR controls at the scene level. I would like to add basic manipulation of the graph with VR controllers and/or hands, but I am failing to conceptually connect the two. The comments on the repos basically just say "use the lower-level frameworks to do this".
The concrete question: how do I add the ability to drag a node with this component? I know I'll have to write custom code to do it; I'm just unsure how and where, and I don't yet know where to start.
In other examples of this, the ability to manipulate the graph is there, but only for a mouse and not in VR. I'm fine with hacking it so it only works in VR (specifically the Meta Quest 2); I don't need to worry about anything else at the moment. I am using other A-Frame-level components too, so I would like to stay at the A-Frame level of abstraction as much as possible.
I am seeking comments on general direction, pseudocode, stubs, etc. (I've included a rough stub of where I think the code needs to go below.) I feel like I'm missing something small here. I am pretty much a beginner.
A-Frame version: 1.3.0
Updated question:
Ability to move the nodes in the graph using Oculus Quest 2 controllers
Ability to twist/turn and scale the entire graph in and out using Oculus Quest 2 controllers
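To make this concrete, here is the kind of stub I imagine writing, attached to a controller entity. It is only a sketch of where I think the logic needs to live; everything inside the handlers is pseudocode, and the component name is my own invention.

```js
// My guess at a drag component for a controller entity, e.g.:
// <a-entity oculus-touch-controls="hand: right" node-dragger></a-entity>
// 'gripdown'/'gripup' are events emitted by oculus-touch-controls.
AFRAME.registerComponent('node-dragger', {
  init: function () {
    this.grabbedNode = null;
    this.el.addEventListener('gripdown', () => {
      // TODO: raycast from the controller into the forcegraph entity's
      // object3D and remember the intersected node as this.grabbedNode.
    });
    this.el.addEventListener('gripup', () => {
      this.grabbedNode = null;
    });
  },
  tick: function () {
    if (!this.grabbedNode) return;
    // TODO: each frame, move the grabbed node to follow the controller,
    // presumably by pinning its position in the underlying force simulation.
  }
});
```

Is that roughly the right shape, or am I going about this the wrong way?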
Related
May I have a 2D layer for UI, Text, Buttons, etc over the 3D scene in ThreeJS?
Ideally something like the engine from PixiJS inside ThreeJS? I've seen that PixiJS offers some 3D features, so why not combine both libraries into something super-powerful? I just do not want to place any HTML DOM elements over the WebGL canvas, as this will probably slow down performance on mobile devices.
One way to solve this issue is to implement the UI as screen-space sprites, as demonstrated in the following official example (check out how the red sprites are rendered):
https://threejs.org/examples/webgl_sprites
The idea is to render them with a separate orthographic camera and an additional call to WebGLRenderer.render(). Besides, instances of THREE.Sprite support raycasting, which is of course useful when implementing interaction.
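A minimal sketch of that two-pass setup, assuming `renderer`, `scene`, and `camera` already exist for the main 3D scene and `uiTexture` is some texture for the sprite (both names are placeholders):

```js
// Screen-space UI pass: an orthographic camera sized in pixels plus a
// second render call drawn on top of the main scene.
const width = window.innerWidth, height = window.innerHeight;

const uiScene = new THREE.Scene();
const uiCamera = new THREE.OrthographicCamera(
  -width / 2, width / 2, height / 2, -height / 2, 1, 10
);
uiCamera.position.z = 10;

const sprite = new THREE.Sprite(new THREE.SpriteMaterial({ map: uiTexture }));
sprite.scale.set(64, 64, 1);                               // size in pixels
sprite.position.set(-width / 2 + 48, height / 2 - 48, 1);  // near the top-left corner
uiScene.add(sprite);

function animate() {
  requestAnimationFrame(animate);
  renderer.autoClear = true;
  renderer.render(scene, camera);      // main 3D pass
  renderer.autoClear = false;          // keep the color buffer...
  renderer.clearDepth();               // ...but clear depth so the UI draws on top
  renderer.render(uiScene, uiCamera);  // UI pass
}
animate();
```

Raycasting against the sprites for interaction works the same as in the main scene, just with `uiCamera` as the picking camera.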
Building on Mugen87's answer, you can also use THREE.Shape to make visual containers adapted to the user's screen size:
https://threejs.org/docs/#api/en/extras/core/Shape
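For example, a rounded rectangular panel could be built roughly like this (the helper name, sizes, and color are placeholders):

```js
// Hypothetical helper: build a rounded-rectangle panel mesh from a THREE.Shape.
function makePanel(width, height, radius) {
  const shape = new THREE.Shape();
  const x = -width / 2, y = -height / 2;
  shape.moveTo(x + radius, y);
  shape.lineTo(x + width - radius, y);
  shape.quadraticCurveTo(x + width, y, x + width, y + radius);
  shape.lineTo(x + width, y + height - radius);
  shape.quadraticCurveTo(x + width, y + height, x + width - radius, y + height);
  shape.lineTo(x + radius, y + height);
  shape.quadraticCurveTo(x, y + height, x, y + height - radius);
  shape.lineTo(x, y + radius);
  shape.quadraticCurveTo(x, y, x + radius, y);

  const geometry = new THREE.ShapeGeometry(shape);
  const material = new THREE.MeshBasicMaterial({ color: 0x222244 });
  return new THREE.Mesh(geometry, material);
}

// e.g. a 200x100 px panel added to the screen-space UI scene from the answer above
// uiScene.add(makePanel(200, 100, 12));
```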
THREE.Shape can also be used to make mesh-based text, as illustrated in this example:
https://threejs.org/examples/?q=text#webgl_geometry_text_shapes
You should also have a look at three-mesh-ui, an add-on for building mesh-based user interfaces with three.js:
https://github.com/felixmariotto/three-mesh-ui
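A rough sketch of how a three-mesh-ui panel is typically set up (the font asset paths are placeholders, and the exact option names and import style may vary between releases, so check the library's docs):

```js
import ThreeMeshUI from 'three-mesh-ui';

// A text container positioned in the 3D scene; three-mesh-ui text needs an
// MSDF font (JSON + texture), referenced here by placeholder paths.
const container = new ThreeMeshUI.Block({
  width: 1.2,
  height: 0.5,
  padding: 0.05,
  fontFamily: './assets/Roboto-msdf.json',  // placeholder asset
  fontTexture: './assets/Roboto-msdf.png',  // placeholder asset
});
container.add(new ThreeMeshUI.Text({ content: 'Hello UI' }));

container.position.set(0, 1.5, -2);
scene.add(container);

// three-mesh-ui also needs to be updated every frame in the render loop:
// ThreeMeshUI.update();
```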
I don't know how many people have used SFML, but I basically want to draw my GUI and am unsure of how to do so.
To clarify: I know how to draw a GUI, but I don't know the 'correct' way to do so.
Currently I am drawing a GUI in the same RenderWindow that is used to draw the Game.
I have started to introduce Views into my game; I have a Game View and a GUI View, which take up 75% and 25% of the screen's height (respectively).
Now the question is:
Should I render the GUI in the same RenderWindow, but in a portion of the 'map' that the player is unable to reach, and have the GUI View locked on that location displaying the GUI?
Another idea I have thought of (unsure if it is plausible) is to have a second RenderTarget which renders the GUI and is displayed in the GUI View.
If there is a method I have not discovered, or one that is recommended, I am happy to hear about it. I searched, but all I found was the SFML documentation, in which I couldn't find my answer.
Your question is slightly confusing as I do not know whether you are having a design issue, or a technical issue of implementing the UI. In terms of the design issue:
I have started to introduce Views into my game, I have a Game View and a GUI View, which take up 75% and 25% of the screens height (respectively).
I haven't encountered a case where multiple views are needed for a UI. Take this example:
The UI consists of all the images in the black area. The positions of those images are always updated relative to the position of the game view that follows the player around, nothing more. A second view isn't needed because the components (images) of the UI follow the view around, just as the view follows the player around.
Now, if you are having a technical issue, then please elaborate on the exact issue that has some accompanying code and I will do everything I can to help out.
I've got the task of programming a graphical network editor application as a university project. For that I need three types of items/controls: a circle shape, a rectangular shape, and arrows to connect the other shapes (the whole thing works somewhat like MS Visio in some ways). The shapes/controls need some additional features like moving, scaling, a context menu, etc. I also need full control over the graphical representation of these objects, i.e. I want to 'draw' them myself, or at least be able to modify them as I need.
I am using JavaFX and have little to no experience with it. So I was wondering, what would be the best way to implement these custom controls. It is required to use JDK 7, so using SkinBase and BehaviourBase is not an option, since they are private before JDK 8.
I was thinking about subclassing Path or Canvas to use as my controls. But I know too little about the implications to make an informed decision.
Could someone give me some advice, which base classes to consider and what implications that might have?
Thanks a lot.
This answer is just going to be advice, there is no real right or wrong answer here. Advice is necessarily opinionated and not applicable to all situations, if you don't agree with it, or it doesn't apply to your situation, just ignore it.
Sorry for the length, but the question is open-ended and the potential answer is complicated.
I was thinking about subclassing Path or Canvas to use as my controls.
Don't. Favor composition over inheritance. Have a control class which implements the functional interface of the control and works regardless of whatever UI technology is behind it. Provide the control class a reference to a skin class, which is the UI representation of the control. The skin will specify how to render the control in a given state (getting the control state from the associated control object). Also, the skin will be the thing which responds to user manipulations, such as mouse presses, and instructs the control to change its state based upon the mouse press - so the skin knows what control it is associated with and vice versa. In the control class, have bindable properties to represent the control state.
A simple example of this is the Square and SquareSkin from this tic-tac-toe game implementation.
For example, imagine a checkbox control. The bindable property of control might be an enum with states of (checked, unchecked, undetermined). A skin might render the checkbox as a square box with a tick mark to represent the checked state. Or maybe the skin will render a rounded edge with an X. The skin could use whatever technology you want to render the checkbox, e.g. a canvas or a collection of nodes. The skin registers listeners for mouse clicks, key presses, etc. and tells the checkbox control to set its check state. It also has a listener on the check state and will choose whether to render the check tick based on that state.
The key thing is that the API interface to the checkbox is just a checkbox control class with a check state. The way the UI is handled is abstracted away from the checkbox API, so you can swap the UI implementation in and out however you want without changing any other code.
Subclassing Path is quite different from subclassing Canvas. For the situation you describe, I would definitely favor subclassing a Shape node (or Region) as opposed to a Canvas. Using Nodes, you automatically get a really rich UI rendering and event model, bindable property set, and painting framework which you won't get with a Canvas. If you don't use Nodes, you will likely end up trying to re-create and build some parts of the Node functionality in some sub-standard way. Canvas is great for things like porting 2D games or graphing engines from other frameworks, or building things that are pixel manipulators like a particle system, but otherwise avoid it.
Consider using a Layout Pane for your control skin, e.g. a StackPane or something like that. The layout panes are containers, so you can place stuff inside them and use aggregation and composition to build up more complex controls. Layout panes can also help with laying out your nodes.
Anything which subclasses a region like a Pane can be styled via CSS into arbitrary shapes and colors. This is actually how the in-built checkbox (and other controls) in JavaFX work. The checkbox is a stack of two regions, one the box and the other the check. Both are styled in CSS - search for .check-box in the modena.css stylesheet for JavaFX. Note how -fx-shape is used to get the tick shape by specifying an svg path (which you can create in an svg editor such as inkscape). Also note how background layering is used to get things like focus rings. The advantage of using CSS for style is that you can stylistically change your controls without touching your Java code.
so using SkinBase and BehaviourBase is not an option
Even though you have decided to not use these base classes (which I think is an OK decision even if you are targeting only Java 8+), I think it is worthwhile studying the design of the controls in JavaFX source. Study button or checkbox, so you can see how the experts do these kind of things. Also note that such implementations may be overkill for your application because you aren't building a reusable control library. You just need something which will work well and simply within the confines of your application (this is the reason I don't necessarily recommend extending SkinBase for all applications).
As a minimum read up on Controls on the open-jfx wiki.
It is required to use JDK 7
IMO, there is not much point in developing a new JavaFX application to target Java 7. There were many bug fixes for Java 8, plus new and useful features added. In general, Java 7 is a sub-optimal target platform for JavaFX applications. If you package your application as a self-contained application, then you can ship whatever Java platform you want with your application, so the target platform doesn't matter.
IMO deployment technologies such as WebStart or browser embedded applications (applets), which can make use of pre-installed java runtimes on a system, are legacy deployment modes which are best avoided for most applications.
Still, there might be a situation where you absolutely must have Java 7 due to some constraints outside your control, so I guess just evaluate carefully against your situation. If you have such constraints and must build on a stable, legacy UI toolkit which works with runtimes released years ago, you could always use Swing.
From the Oracle documentation, the Path class represents a simple shape and provides facilities required for basic construction and management of a geometric path, while Canvas is an image that can be drawn on using a set of graphics commands provided by a GraphicsContext.
This means you can use a Canvas to draw the required shapes however you want.
One of the basic approaches would be as follows:
Use a Canvas, as it allows custom user "drawing". Create menu buttons for the circles, rectangles, arrows, etc., like in MS Paint, and use click-and-drag handling on the canvas for the actual drawing.
Circle and Rectangle are available in the standard libraries, which you can use.
You can draw arrows the way it is done here; I think it's the same kind of example you want to create.
Custom controls and context menus can be created too (see docs.oracle.com).
Scaling and moving can be done by binding a drag handler to a scale or translate animation. MousePressed and MouseReleased events can be used to track the coordinates to draw the shape in.
This is just my advice: you could also prototype the controls in an application or design tool that can provide the targeted control or icon with a circle shape, a rectangular shape, and arrows. I don't know how well this advice will work for you, but you can try it.
I've been working with an animator to help with my game. The animations all work fine using morph targets, but the file size gets way too large, so skeletal animations are the answer. We've spent a week working to get the animations exported from Blender correctly.
After reading many, many articles we were able to get basic animations working correctly. I make sure to set the armature to the rest pose, export on the first frame, and all that, but the more complicated animations are off.
You can see in this example here (click to cycle animations):
http://www.titansoftime.com/beta/animation2.html
My animator said the problems are related to the bone constraints his controllers use. He said his technique is called "inverse kinematics".
Anyone have any ideas?
I have found the answer. For one, you cannot scale the geometry in the JSON loader (however, you can scale the mesh object once it is created).
The main thing is that my animator was using inverse kinematics, which apparently three.js does not play nicely with.
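For reference, the scaling fix looks roughly like this with the legacy JSON workflow (the loader and material usage follow the old three.js JSON pipeline and may not match current releases; 'model.json' is a placeholder):

```js
// Scale the created mesh object, not the geometry inside the loader.
const loader = new THREE.JSONLoader();
loader.load('model.json', function (geometry, materials) {
  const mesh = new THREE.SkinnedMesh(geometry, materials[0]);
  mesh.scale.set(2, 2, 2); // adjust the size here instead
  scene.add(mesh);
});
```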
I'm creating an in-car control screen (will be run from a Mac Mini) and am looking for some libraries or code samples for "effects". For example, I might want the name of the current track playing to fly in from the right. I might want screens to fade or slide up, etc.
I am aware that I can manually write these effects in Objective-C.
I am hoping there is a library like scriptaculous for JavaScript that allows me to easily manipulate an existing TextView, ImageView, etc.
A framework or otherwise is preferred. I'm working in native Cocoa. I don't mind if the library costs money.
Thanks,
Rick
Have a look at Core Image and Core Animation, both of which will allow you to add visual effects. Core Image, as its name implies, works with images only but can do fancy transitions. You can "fake" UI animations with it, though, by rendering a view to an image, swapping the image in in place of (or over the top of) the view, and then running a transition to another view.
Core Animation works directly with Cocoa Views and does have some transitions available. Both APIs (especially Core Animation) are fairly complex and have a learning curve.