I have two different threejs scenes and each has its own camera. I can control each camera individually with a corresponding TrackballControls instance.
Is there a reliable way to 'lock' or 'bind' these controls together, so that manipulating one causes the same camera repositioning in the other? My current approach is to add change listeners to the controls and update both cameras whenever either changes, but this isn't very neat because, for one, both controls can be changing at once (due to damping).
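For reference, the listener wiring I have now is roughly this (simplified; controls1/controls2 and camera1/camera2 are the per-scene instances):

    // Mirror either camera onto the other whenever its controls fire a 'change' event.
    function copyCamera(source, target) {
        target.position.copy(source.position);
        target.quaternion.copy(source.quaternion);
        target.up.copy(source.up);
    }

    controls1.addEventListener('change', function () { copyCamera(camera1, camera2); });
    controls2.addEventListener('change', function () { copyCamera(camera2, camera1); });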
I believe it should work if you set the matrices of the second camera to the values of the first and disable automatic matrix-updates of both cameras:
camera2.matrix = camera1.matrix;
camera2.projectionMatrix = camera1.projectionMatrix;
camera1.matrixAutoUpdate = false;
camera2.matrixAutoUpdate = false;
But now you need to update the matrix manually in your render loop:
camera1.updateMatrix();
That call takes the values for position, rotation and scale (which have been updated by the controls) and composes them into camera1.matrix, which, per the assignment above, is also used as the matrix for the second camera.
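Put together, the render loop for this approach might look like the following minimal sketch (assuming a single controls instance drives camera1; renderer1/renderer2 and scene1/scene2 are placeholder names for your own setup):

    function animate() {
        requestAnimationFrame(animate);

        controls.update();                      // writes position/rotation of camera1
        camera1.updateMatrix();                 // composes them into the shared camera1.matrix
        camera2.matrixWorldNeedsUpdate = true;  // may be needed so camera2's world matrix is refreshed from the shared matrix

        renderer1.render(scene1, camera1);
        renderer2.render(scene2, camera2);
    }
    animate();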
However, this feels a bit hacky and can lead to all sorts of weird problems. I personally would probably prefer the more explicit approach you have already implemented.
The question is why you are even using two camera and controls instances. As long as the camera isn't added to a scene, you can just render both scenes using the same camera.
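For example, something along these lines (a sketch; aspect, renderer1/renderer2 and scene1/scene2 stand in for however you currently render the two scenes):

    // One camera and one controls instance, used for both views.
    const camera = new THREE.PerspectiveCamera(60, aspect, 0.1, 1000);
    const controls = new THREE.TrackballControls(camera, renderer1.domElement);

    function animate() {
        requestAnimationFrame(animate);
        controls.update();
        renderer1.render(scene1, camera);
        renderer2.render(scene2, camera);
    }
    animate();

One caveat: TrackballControls listens on a single DOM element, so if you want to drag in either view you could pass a container element that wraps both canvases instead of renderer1.domElement.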
Is it possible to use the Observer or Publisher design patterns to control these objects?
It seems that you are manipulating the cameras with a control. You might create an object that has the same control interface, but when you pass a command to the object, it repeats that same command to each of the subscribed or registered cameras.
/* pseudo-code: ES6 */
class MasterControl {
    constructor() {
        this.camera_bindings = [];
    }

    control_action1() {
        for (const camera of this.camera_bindings) {
            camera.control_action1();
        }
    }

    control_action2(arg1, arg2) {
        for (const camera of this.camera_bindings) {
            camera.control_action2(arg1, arg2);
        }
    }

    bindCamera(camera) {
        if (this.camera_bindings.indexOf(camera) === -1) {
            this.camera_bindings.push(camera);
        }
    }
}
var master = new MasterControl();
master.bindCamera( camera1 );
master.bindCamera( camera2 );
master.bindCamera( camera3 );
let STEP_X = -5;
let STEP_Y = 10;
//the following command will send the command to all three cameras
master.control_action2( STEP_X, STEP_Y );
This binding is self-created rather than using native three.js features, but it is easy to implement and can get you up and running quickly.
Note: I wrote my pseudocode in ES6 because it is simpler and easier to communicate. You can write it in ES5 or older, but you must change the class definition into a series of function-based object definitions that create the master object and its functionality.
I have a scene in Three (using AFrame) that requires environment mapping for lighting. Using a standard HDR cubemap, I get the following results:
This is correct as far as blurring based on roughness goes, since I have mipmaps being generated and the minFilter set to LinearMipmapLinearFilter. The issue with this approach is that ambient lighting isn't being applied - the light in the scene, a directional light, is the only thing providing any lighting information to the scene. Unfortunately, this results in entirely black shadows, no matter how bright the HDRI.
However, if I use the PMREMGenerator from Three in addition to the above, the ambient lighting issue is solved. Unfortunately, this is what happens as a result:
As shown here, the texture filtering is now out of whack. According to the comments left in the PMREMGenerator script itself:
This class generates a Prefiltered, Mipmapped Radiance Environment Map (PMREM) from a cubeMap environment texture. This allows different levels of blur to be quickly accessed based on material roughness. It is packed into a special CubeUV format that allows us to perform custom interpolation so that we can support nonlinear formats such as RGBE. Unlike a traditional mipmap chain, it only goes down to the LOD_MIN level (above), and then creates extra even more filtered 'mips' at the same LOD_MIN resolution, associated with higher roughness levels. In this way we maintain resolution to smoothly interpolate diffuse lighting while limiting sampling computation.
...which leads me to believe the output should be smoothed, like in my first example.
Here's my code for the first example:
const ctl = new HDRCubeTextureLoader();
ctl.setPath(hdrPath);
ctl.setDataType(THREE.UnsignedByteType);
const hdrUrl = [
`${src}/px.hdr`,
`${src}/nx.hdr`,
`${src}/py.hdr`,
`${src}/ny.hdr`,
`${src}/pz.hdr`,
`${src}/nz.hdr`
];
const hdrSky = ctl.load(hdrUrl, tex => this.skies[src] = hdrSky );
// Then later...
obj.material.envMap = this.skies[this.skySources[index]];
obj.material.needsUpdate = true;
Here's my code for the second example:
const ctl = new HDRCubeTextureLoader();
ctl.setPath(hdrPath);
ctl.setDataType(THREE.UnsignedByteType);
const hdrUrl = [
`${src}/px.hdr`,
`${src}/nx.hdr`,
`${src}/py.hdr`,
`${src}/ny.hdr`,
`${src}/pz.hdr`,
`${src}/nz.hdr`
];
const pmremGen = new PMREMGenerator(this.el.sceneEl.renderer);
pmremGen.compileCubemapShader();
const hdrSky = ctl.load(hdrUrl, tex => {
    const hdrRenderTarget = pmremGen.fromCubemap(hdrSky);
    this.skies[src] = hdrRenderTarget.texture;
});
// Then later...
obj.material.envMap = this.skies[this.skySources[index]];
obj.material.needsUpdate = true;
I seem to have hit a wall in regard to the filtering. Even when I explicitly change the filtering type and turn on mipmap generation inside of PMREMGenerator.js, the results appear to be the same. Here's an official example that uses the PMREMGenerator without any issue: https://threejs.org/examples/webgl_materials_envmaps_hdr.html
As a closing remark, I will note that we're using Three.js r111 (and there are reasons we can't switch it out fully), so I brought in the PMREMGenerator from the latest version of Three as of this writing (r122; the newer version is needed because the r111 one is written completely differently). Thus, I wouldn't be surprised if this was all being caused by some conflict between versions.
EDIT: I just put the resulting envmap as a standard map on some planes, and much to my surprise none of the blurred LODs even show up. Here's what mine look like:
And here's what it should resemble (don't mind the torus knot):
EDIT: I've found a workaround for now (essentially, not using PMREMGenerator), but will leave this up in case a solution is discovered.
I believe the problem is with the argument you're using in fromCubemap(). Right now your code isn't using the tex cube texture variable that's passed to the load callback. Try this:
const hdrSky = ctl.load(hdrUrl, tex => {
    // use tex instead of hdrSky
    const hdrRenderTarget = pmremGen.fromCubemap(tex);
    this.skies[src] = hdrRenderTarget.texture;
});
I'm fairly new to three.js and trying to get a better understanding of ray casting. I have used it so far in a game to detect when one object collides with another on the page, which works perfectly. In the game I'm building, this is used to take health from the hero as it crashes into walls.
I am now trying to implement a target which, when hovered over certain objects, will auto-shoot. However, the target only registers a hit once the object passes through the target mesh, rather than when the ray is cast through it.
To further detail this: the ray is cast from the camera through the target, and if the target is over a mesh (stored in an object array) then I want it to trigger a function.
Within my update function I have this:
var ray = new THREE.Raycaster();
var crossHairClone = crossHair.position.clone();
var coards = {};
coards.x = crossHairClone.x;
coards.y = crossHairClone.y;
ray.setFromCamera(coards, camera);
var collisionResults = ray.intersectObjects( collidableMeshList );
if ( collisionResults.length > 0 ) {
    console.log('Target Hit!', collisionResults);
}
The console log is only triggered when something in the collidableMeshList actually touches the target mesh, rather than when the target is aiming at it.
How do I extend the ray to pass through the target (which I thought it was already doing), so that anything crossing the ray triggers my console log?
Edit
I've added a URL to the game in progress. There are other wider issues with the game, my current focus is just the target ray casting.
Game Link
Many thanks to everyone who helped me along the way on this one, but I finally solved it, only not through the means that I had expected. Upon reading into the setFromCamera() function, it looks like it is mainly intended for mouse coordinates, which in my instance was not what I wanted. I wanted a target on the page and the ray to pass through it. For some reason my ray was shooting vertically up from the centre of the whole scene.
In the end I tried a different approach and set the ray's position and direction directly, rather than relying on setting it from the camera's position.
var vector = new THREE.Vector3(crossHair.position.x, crossHair.position.y, crossHair.position.z);
var targetRay = new THREE.Raycaster(camera.position, vector.sub(camera.position).normalize());
var enemyHit = targetRay.intersectObject( enemy );
if ( enemyHit.length > 0 ) {
    console.log('Target Hit!', targetRay);
}
I'll leave this up for a while before accepting this answer in case anyone has a better approach than this or has any corrections over what I have said in regards to setFromCamera().
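For reference, in case anyone does want to stay with setFromCamera(): it expects normalized device coordinates in the -1 to +1 range (mouse-style coordinates), so a sketch that should be roughly equivalent to the above is to project the crosshair's world position into NDC first:

    // Project the crosshair's world position into normalized device coordinates.
    var ndc = crossHair.position.clone().project(camera);

    var targetRay = new THREE.Raycaster();
    targetRay.setFromCamera(new THREE.Vector2(ndc.x, ndc.y), camera);

    var enemyHit = targetRay.intersectObject(enemy);
    if (enemyHit.length > 0) {
        console.log('Target Hit!', enemyHit);
    }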
I am trying to make use of Raycaster in a ThreeJS scene to create a sort of VR interaction.
Everything works fine in normal mode, but not when I enable stereo effect.
I am using the following snippet of code.
// "camera" is a ThreeJS camera, "objectContainer" contains objects (Object3D) that I want to interact with
var raycaster = new THREE.Raycaster(),
    origin = new THREE.Vector2();

origin.x = 0; origin.y = 0;

raycaster.setFromCamera(origin, camera);

var intersects = raycaster.intersectObjects(objectContainer.children, true);
if (intersects.length > 0 && intersects[0].object.visible === true) {
    // trigger some function myFunc()
}
So basically when I try the above snippet of code in normal mode, myFunc gets triggered whenever I am looking at any of the concerned 3d objects.
However as soon as I switch to stereo mode, it stops working; i.e., myFunc never gets triggered.
I tried updating the value of origin.x to -0.5. I did that because in VR mode, the screen gets split into two halves. However that didn't work either.
What should I do to make the raycaster intersect the 3D objects in VR mode (when stereo effect is turned on)?
Could you please provide a jsfiddle with the code?
Basically, if you are using stereo in your app, it means you are using two cameras, so you need to check your intersections against both cameras' views, which can become an expensive process.
var cameras = {
    'camera1': new THREE.PerspectiveCamera(50, window.innerWidth / window.innerHeight, 1, 10000),
    'camera2': new THREE.PerspectiveCamera(50, window.innerWidth / window.innerHeight, 1, 10000)
};

for (var cam in cameras) {
    raycaster.setFromCamera(origin, cameras[cam]);
    // continue your logic
}
You could use a vector object that simulates the camera intersection to avoid checking twice, but this depends on what you are trying to achieve.
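For example, if the gaze point is always the centre of the view, one central ray from the original (non-stereo) camera may be enough; a rough sketch, assuming camera is the mono camera the stereo effect renders from:

    var raycaster = new THREE.Raycaster();
    var center = new THREE.Vector2(0, 0); // middle of the screen in normalized device coordinates

    function checkGaze() {
        raycaster.setFromCamera(center, camera);
        var intersects = raycaster.intersectObjects(objectContainer.children, true);
        if (intersects.length > 0 && intersects[0].object.visible === true) {
            // trigger myFunc()
        }
    }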
I encountered a similar problem and eventually found the reason. With StereoEffect, THREE.js displays the meshes for the two eyes, but in reality it adds only one mesh to the scene, exactly in the middle of the line left-eye-mesh <-> right-eye-mesh, hidden from the viewer.
So when you use the raycaster, you need to use it on the real mesh in the middle, not on the illusion displayed for each eye!
I detailed how to do it here:
Three.js StereoEffect displays meshes across 2 eyes
Hope it solves your problem!
You can use my StereoEffect.js file in your project to resolve the problem. See the example of its usage, and see my Raycaster stereo pull request as well.
I am using C++11 with the GNU toolchain and gtkmm 3, on Ubuntu 12.04 LTS 32-bit.
I have been playing with some of the examples for gtkmm 3 in Programming with gtkmm 3.
Based on the 17.2.1 Example there, I inherited from Gtk::DrawingArea (MyDrawingArea here) and overrode the on_draw() event handler as follows:
MyDrawingArea.hpp
...
protected:
    bool on_draw( const Cairo::RefPtr<Cairo::Context>& cr ) override;
MyDrawingArea.cpp
bool MyDrawingArea::on_draw( const Cairo::RefPtr<Cairo::Context>& cr )
{
    Gtk::Allocation allocation = get_allocation( );
    const int width = allocation.get_width( );
    const int height = allocation.get_height( );
    int coord1{ height - 3 };

    cr->set_line_width( 3.0 );
    this->get_window( )->freeze_updates( );

    cr->set_source_rgb( 0, 0.40, 0.60 );
    cr->move_to( 0, coord1 );
    cr->line_to( width, coord1 );
    cr->stroke( );

    cr->set_source_rgb( 1, 0.05, 1 );
    cr->move_to( mXStart, coord1 );
    cr->line_to( mXStart, mYAxis * 1.5 );
    cr->show_text( to_string( mYAxis ) );
    cr->stroke( );

    mXStart += 5;
    this->get_window( )->thaw_updates( );

    return true;
}
My goal is to draw a simple bar graph based on a calculation I do in a little test application, the idea being that each time the on_draw() event is called, the next bar is moved 5 units to the right along the x axis (mXStart) and a vertical line is drawn based on the new mYAxis value, which is computed from the results of the new calculation.
When I want to repaint my graph and trigger the MyDrawingArea::on_draw() event, I call MyDrawingArea.show_all() from my application after the calculation has completed, and new x and y axes have been set.
However, this does not work as I expected: MyDrawingArea.show_all() invalidates the entire drawing window and draws from scratch: the new graph line appears in its proper place, but the previous ones are erased. I also tried MyDrawingArea.queue_draw(), which had the same effect. But I want to persist the previous graph results so I can get a profile of the calculation results, as I calculate with different values.
This implementation also causes the bottom line of my graph (the x axis), drawn by the first stroke() call in my code example, to be rendered anew on each call to on_draw(). That should not be necessary, since the line persists for the lifetime of MyDrawingArea, but I haven't yet found a way to avoid invalidating and re-drawing it on every on_draw() event.
I am very new to Cairo, so I'm sure I'm probably doing this completely wrong, but explicit, task-oriented documentation appears to be sparse - I have not found anything that explains how to do this, although I'm sure it is quite simple.
What do I need to do to draw a new line on a Gtk::DrawingArea while persisting the graph lines drawn on previous passes, and to establish graphics elements that persist for the lifetime of the Gtk::DrawingArea widget? Obviously using show_all() or queue_draw() and doing everything in the on_draw() event is not the way to go.
In general, you must draw the entire widget, and Cairo will clip the drawing to the predefined dirty region. See also the GTK reference manual entry for the "GtkWidget::draw" signal for performance tips:
The signal handler will get a cr with a clip region already set to the widget's dirty region, i.e. to the area that needs repainting. Complicated widgets that want to avoid redrawing themselves completely can get the full extents of the clip region with gdk_cairo_get_clip_rectangle(), or they can get a finer-grained representation of the dirty region with cairo_copy_clip_rectangle_list().
So you may be able to redraw only the region you want with gtk_widget_queue_draw_area().
G'day all,
In short, I'm using a for loop to create a bunch of identical sprites that I want to bounce around the screen. The problem is how to write a collision detection process for the sprites. I have used the approach of placing rectangles around sprites and using the Intersects method for rectangles, but in that case I created each sprite separately and could identify each one uniquely. Now I have a bunch of sprites but no apparent way to tell one from another.
In detail: if I create an object called Bouncer.cs and give it the movement instructions in its Update() method, then create a bunch of sprites using this in Game.cs:
for (int i = 1; i < 5; ++i)
{
    Vector2 position = new Vector2(i * 50, i * 50);
    Vector2 direction = new Vector2(i * 10, i * 10);
    Vector2 velocity = new Vector2(10);
    Components.Add(new Bouncer(this, position, direction, velocity, i));
}

base.Initialize();
I can draw a rectangle around each one using:
foreach (Bouncer component1 in Components)
{
    Bouncer thing = (Bouncer)component1;
    Rectangle thingRectangle;
    thingRectangle = new Rectangle((int)thing.position.X, (int)thing.position.Y, thing.sprite.Width, thing.sprite.Height);
But now, how do I check for a collision? I can hardly use:
if (thingRectangle.Intersects(thingRectangle))
I should point out I'm a teacher by trade and play with coding to keep my brain from turning to mush. Recently I have been working with Python, and with Python I could just put all the sprites into a list:
sprites = []
Then I could simply refer to each one as sprites[1] or sprites[2] or whatever its index in the list is. Does XNA have something like this?
Please let me know if any more code needs to be posted.
Thanks,
Andrew.
One solution, which I use in my game engine, is to have logic code run inside the objects for every game Update, i.e. every frame. It seems you already do this, according to the variable names, which indicate you run some physics code in the objects to update their positions.
You might also want to create the collision rectangle inside the Bouncer's constructor so it's more accessible and you make good use of object-oriented programming - or even expose it as a property, so it is recalculated every time you access it instead of you manually updating the bounding/collision box. For example:
public Rectangle BoundingBox
{
    get { return new Rectangle((int)_Position.X, (int)_Position.Y, width, height); }
}
Whichever way works, but the collision checks can be run inside the Bouncer object. You can either make the reference list of Bouncer objects static or pass it to the objects themselves. The code for the collisions is very simple:
// Components can be a static List, or you can pass it in through the Bouncer constructor
foreach (Bouncer bouncer in Components)
{
    // skip comparing the object against itself
    if (bouncer != this && bouncer.BoundingBox.Intersects(this.BoundingBox))
    {
        // they collided
    }
}