Attaching child nodes/objects to an Object3D in three.js

I'm loading an .obj file into an Object3D object. That's working well and I can see it on the screen. However, I would like to create the impression of spinning sprites (fireflies, lightning globes, that sort of thing) at certain points above the object.
I've been looking over the three.js documentation on sprites and other things, and am very impressed with the capabilities. But I need a little help on how to create a standalone sprite 'globe', as it were, with sprites flying about in their own local coordinate system, and then move that standalone 'globe' to a point above the obj file. Could someone help me get started with this? (I guess it comes down to: how do you position one object relative to another in three.js?)

You should be able to simply attach the spinning sprites to the model using the add() function:
//create an empty 'container'/Object3D
var spinningSprites = new THREE.Object3D();
//add elements to it (each sprite must be a distinct object):
for (var i = 0; i < numSprites; i++) spinningSprites.add(yourParticleObjectInstances[i]);
//lastly add the whole container to the loaded model:
yourLoadedModel.add(spinningSprites);
The above is an example; you would probably use different variable names, etc., but the idea is simple: use add().
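To then position the container relative to the model: a child's position is expressed in its parent's local space, so once the container has been added with add(), a plain local offset does it. A minimal sketch reusing the names above (the renderer/scene/camera names in the render loop are assumptions):
//position the container in the model's local space, e.g. 2 units above its origin:
spinningSprites.position.set(0, 2, 0);
//and spin the whole container each frame in your render loop:
function animate() {
    requestAnimationFrame(animate);
    spinningSprites.rotation.y += 0.01;
    renderer.render(scene, camera);
}
animate();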

Related

How to project (or paste) a panorama onto a model?

Before asking this question, I searched many places and found some similar ideas, but no solution. My question can also be described as: how do I recalculate the model's UVs to fit a panorama designed for a six-face skybox?
Recently, I came upon a unique way to get a fluent 3D roaming experience on Matterport's official gallery: https://matterport.com/gallery/
I just want to know how they did it. Their product is very fluent when switching between panorama pictures.
After roaming many times, I found the secret. I realized that the panorama carrier they use is not a box or a sphere, but the object they show first! The evidence is that when you switch viewpoints, objects such as chairs and tables have their own shadow (one chair gives two images: one standing up, the other lying on the floor).
With the objects in the panorama pasted onto their own corresponding objects, and with depth information, the roaming switch becomes more fluent. (As for why they do not use the objects directly, I think it is because of limited hardware: many irregular faces obtained from the scanning equipment cannot be used directly.)
I want to use this idea in my project. I have a group of six panoramas that paste onto a BoxGeometry perfectly, and I just want to paste them onto the model. But I am stuck projecting the full 360 degrees: I have found how to project one direction, but I cannot project the remaining five.
var _p = houseObject.geometry.attributes;
for (var i = 0; i < _p.position.count; i++) {
    //transform the vertex to world space, then project it to get its screen coordinate
    var uvtempbeforeconvert = new THREE.Vector3(_p.position.array[3 * i], _p.position.array[3 * i + 1], _p.position.array[3 * i + 2])
        .applyMatrix4(houseObject.matrixWorld).project(camera1);
    if (uvtempbeforeconvert.x < 1 && uvtempbeforeconvert.x > -1 && uvtempbeforeconvert.y < 1 && uvtempbeforeconvert.y > -1) {
        VerticesArray1.push(_p.position.array[3 * i], _p.position.array[3 * i + 1], _p.position.array[3 * i + 2]);
        uvArray1.push(uvtempbeforeconvert.x * 0.5 + 0.5, uvtempbeforeconvert.y * 0.5 + 0.5);
    }
}
Yes, I succeeded in calculating one direction. But I can't deal with the triangle faces that span two or more view frustums, like a face at the edge of the box.
How should I deal with this problem? Or did I run in the wrong direction from the start? Which direction should I take?
After asking many people, I found that I need to use a ShaderMaterial in three.js together with a cube texture (samplerCube in the shader). With that I can get the pixel color I need!
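For what it's worth, a minimal sketch of that cube-texture idea (the six file names and the capture position are assumptions, not Matterport's actual setup): each fragment samples the panorama cube map along the direction from the point where the panorama was captured, so no per-face UV recalculation is needed.
var cubeTexture = new THREE.CubeTextureLoader().load([
    'px.jpg', 'nx.jpg', 'py.jpg', 'ny.jpg', 'pz.jpg', 'nz.jpg'
]);
var panoramaMaterial = new THREE.ShaderMaterial({
    uniforms: {
        envMap: { value: cubeTexture },
        capturePos: { value: new THREE.Vector3(0, 1.6, 0) } //where the panorama was shot
    },
    vertexShader: [
        'varying vec3 vWorldPos;',
        'void main() {',
        '    vWorldPos = (modelMatrix * vec4(position, 1.0)).xyz;',
        '    gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);',
        '}'
    ].join('\n'),
    fragmentShader: [
        'uniform samplerCube envMap;',
        'uniform vec3 capturePos;',
        'varying vec3 vWorldPos;',
        'void main() {',
        '    vec3 dir = normalize(vWorldPos - capturePos);',
        '    gl_FragColor = textureCube(envMap, dir);',
        '}'
    ].join('\n')
});
Faces that straddle two cube faces stop being a special case here, because the cube lookup picks the right face per fragment.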

Three.js calling clipAction.play() makes animated objects vanish

In three.js, calling action.play() makes objects just vanish, without any error or warning on the console.
I use THREE.ObjectLoader to load a JSON file created in Blender. The SRT (position/scale/quaternion) animation is in the generated file, as are the morph targets. To optimise file size I animated the SRT as a series of null objects. The morph target tracks are in the main object, which I clone 5 times to build the characters (balloons, to be exact).
I previously did extensive testing to introduce shape/morph animation. After being successful I finalised all the animations, only to be trumped by the disappearing models. The SRT (position/scale/quaternion) animation was working fine before, but after refactoring the code to be less spaghettied, the objects vanish exactly when action.play() is called. Logging the mixers and the array containing the clips, everything looks correct (i.e. I see the tracks, the names are right, etc.). Examining the newly generated JSON, it also seems the same and correct (and I have not changed the SRT animations, only introduced shape animation).
So I am lost, and think this looks more and more like a bug. From previous experience I do know it works (or has worked).
I created a jsfiddle: https://jsfiddle.net/oompol/3ya6sqed/
[edit] I turned action.play() back on and call the function from the link in the div [/edit] Please note I had commented out the call to action.play(), so you can see that the load and init work. See the function listed below.
function playScene(scene) {
    for (var parentName in srtMixers) {
        var clpName = "balloon1_fly";
        var clp = THREE.AnimationClip.findByName(animLib, clpName);
        var action = srtMixers[parentName].clipAction(clp);
        action.clampWhenFinished = true;
        console.log("playScene:", clpName, clp, parentName, srtMixers);
        //this is when the problem happens
        action.play();
    }
}
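(For reference, the mixers still have to be advanced every frame; the standard three.js update loop looks roughly like this, with the clock/renderer names being assumptions:)
var clock = new THREE.Clock();
function animate() {
    requestAnimationFrame(animate);
    var delta = clock.getDelta();
    for (var parentName in srtMixers) srtMixers[parentName].update(delta);
    renderer.render(scene, camera);
}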
This is the JSON I am loading:
https://rawgit.com/bakajin/2e3d2f6a722103ed4aefd76f6250ec08/raw/28cad35c20060d478499c0cd40a2753611993720/oomp-scene_balloons-oomp-6.9.4.json
OK, there was something very wrong with the scaling indeed.
The io_three JSON exporter for Blender (r87 dev) writes incorrect matrix transformation data in the geometry object (really tiny scaling values), while the animation tracks with the scaling keys are correctly written as 1,1,1. So all the objects just scaled out of view immediately.
It is hard to see, because the geometry has no separate scaling value, only a matrix. It seems to happen when you set "Scene" to true on export.
I worked around the problem by entering the scaling value into the keyframe tracks. But this will only work if you have no scaling animation (so the keys are all one).
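A rough sketch of that workaround (desiredScale stands in for whatever scale the exporter should have written; the clip and animLib names are from the question above):
var clip = THREE.AnimationClip.findByName(animLib, "balloon1_fly");
clip.tracks.forEach(function (track) {
    //only touch the scale tracks, whose keys are all 1,1,1
    if (track.name.indexOf(".scale") !== -1) {
        for (var i = 0; i < track.values.length; i++) {
            track.values[i] *= desiredScale;
        }
    }
});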
Meanwhile I have extensively edited the JSON by hand, because this is not the only incorrect data: the formatting of the animation object is also wrong, and the durations for the morphTargetInfluence keys are also incorrect. The formatting of these keys is not always correct either.
Hope this helps some other people.

argon-aframe: move with the user's geolocation

I have this project:
my codepen
I want to be able to move forward when the user walks, so it feels like they are walking through the floor plan in VR as they are in real life.
My goal is to get the user's geolocation, show them the room matching their location, and have them walk around the room while viewing the AR on their phone; they would see paintings on the walls.
My challenges are:
walk in real life and move in VR (right now I have it auto-walking forward in the meantime):
var speed = 0.0;
var isMoving = false;
var velocityDelta;

AFRAME.registerComponent("automove-controls", {
    init: function () {
        this.speed = 0.1;
        this.isMoving = true;
        this.velocityDelta = new THREE.Vector3();
    },
    isVelocityActive: function () {
        return this.isMoving;
    },
    getVelocityDelta: function () {
        this.velocityDelta.z = this.isMoving ? -this.speed : 0;
        return this.velocityDelta.clone();
    }
});
capture the user's geolocation, so that the moment they open the site they are placed relative to their location on the floor plan
This is my first attempt, so any feedback would be appreciated.
As far as I know, argon.js is more about geoposition than spatial/marker-based augmented reality.
Moreover, it's quite worrying that their repo for A-Frame has not been touched for a while.
Argon seems like a library for creating scenes at certain points around the user; even their examples are based on positioning things around the user. The reason is that GPS and phone accelerometers are far too inaccurate to provide useful data for spatial positioning. That's why the VIVE needs two base stations, and other devices need at least a camera/IR sensor, to get information about the HMD.
Positioning the person at a point depending on where they are in a room is quite a difficult task: you would need a point of reference and position the user accordingly. It seems impossible, since the user could be anywhere in the world.
I would try to do this using jerome-etienne's marker-based AR.js. The markers would be the points of reference you need, and although image processing seems like a difficult task, AR.js is surprisingly stable with multiple markers, which helps in creating complex scenes.
The markers seem like a good idea, for they can help you with the positioning; moreover, simple scenes have no problem achieving 60+ fps, making the experience quite comfortable.
I would start there, since AR.js seems to be updated frequently.
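To give a flavour, a minimal AR.js marker scene sketch, assuming the A-Frame and AR.js script builds are included on the page (the hiro preset and the box are just placeholders):
<a-scene embedded arjs>
  <!-- each physical marker becomes a fixed point of reference in the scene -->
  <a-marker preset="hiro">
    <a-box position="0 0.5 0" color="yellow"></a-box>
  </a-marker>
  <a-entity camera></a-entity>
</a-scene>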

Blender MakeHuman to Three.js

I'm trying to integrate an animated 3D character into a web browser.
I use MakeHuman 1.02 to create a character, which I import into Blender 2.74 in .mhx format.
I retarget it to a BVH using the MakeWalk plugin for Blender; that provides the motion.
When I try to export the character in .json format (three.js), the following error appears :
MakeHuman is not a valid mesh object.
A mesh object is an object whose properties or vertices we can modify, isn't it?
I tried other formats like .dae (Collada), but it seems the browser doesn't find the skeleton and the textures of the character (even though they are in the same directory), which are necessary for the character's motion.
How do I get the character as a mesh object? Or does somebody know another process that works?
Like Erica pointed out, you need to have a mesh selected to export it. The problem is that this doesn't seem to work if you have multiple meshes; only one will export. That's an issue when using MakeHuman, because their clothes are separate meshes.
One way to fix this is to select all meshes and combine them into one (I believe that's Ctrl + J). However, you'd have to somehow merge all your texture files into one big one, and I have no idea how to do that.
What I do is export the entire scene. Then it doesn't matter what is selected: all meshes get exported. You can load it using either the ColladaLoader, which I would recommend since you're retargeting to a BVH (it worked great for me), or the new ObjectLoader.
If you have your own Scene object on the page that you want to use, you can still load the scene created by the exporter, traverse it to get the items you care about, and add those items to your scene that will display on the page.
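A rough sketch of that traversal, assuming a JSON scene export and an existing scene named myScene (both names illustrative):
var loader = new THREE.ObjectLoader();
loader.load("exported-scene.json", function (loadedScene) {
    //collect first: re-parenting during traverse() would modify the graph mid-walk
    var wanted = [];
    loadedScene.traverse(function (node) {
        if (node instanceof THREE.Mesh) wanted.push(node);
    });
    wanted.forEach(function (mesh) { myScene.add(mesh); });
});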

How to generate texture mapping images?

I want to put/wrap images onto 3D objects. To keep things simple and fast, instead of using (and learning) a 3D library I want to use mapping images. Mapping images are used in such a way:
So you generate the mapping images once for each object and use the same mapping for all images you want to wrap.
My question is: how can I generate such mapping images (given the 3D model)? Since I don't know the terminology, my searches failed me. Sorry if I am using the wrong jargon.
Below you can see a description of the workflow.
I have the 3D model of the object and the input image; I want to generate the mapping images that I can use to produce the textured image.
I don't even know where to start, any pointers are appreciated.
More info
My initial idea was to somehow warp an identity mapping (see below) using an external program. I generated horizontal and vertical gradient images in Photoshop just to see if the mapping works with Photoshop-generated images. The result doesn't look good. I wasn't hopeful, but it was worth a shot.
input
mappings (x and y), they just resize the image, they don't do anything fancy.
result
As you can see, there are lots of artifacts. Custom mapping images I have generated by warping the gradients look even worse.
Here is some more information on mappings: http://www.imagemagick.org/Usage/mapping/#distortion_maps
I am using OpenCV remap() function for mapping.
If I understand you right here, you want to do all of it in 2D?
Calling warpPerspective() for each of your cube surfaces will be much more successful than using remap().
pseudocode outline:
// for each surface:
// get the desired src and dst polygon
// the src one is your texture-image, so that's:
// (getPerspectiveTransform expects float points, hence Point2f)
vector<Point2f> p_src(4), p_dst(4);
p_src[0] = Point2f(0, 0);
p_src[1] = Point2f(0, src.rows - 1);
p_src[2] = Point2f(src.cols - 1, 0);
p_src[3] = Point2f(src.cols - 1, src.rows - 1);
// the dst poly is the one you want textured, a 3d->2d projection of the cube surface.
// (its point order has to correspond to the src order above)
// sorry, you've got to do that on your own ;(
// let's say, you've come up with this for the cube - top:
p_dst[0] = Point2f(15, 15);
p_dst[1] = Point2f(44, 19);
p_dst[2] = Point2f(56, 30);
p_dst[3] = Point2f(33, 44);
// now you need the projection matrix to transform from one to another:
Mat proj = getPerspectiveTransform(p_src, p_dst);
// finally, you can warp your texture to the dst-polygon:
warpPerspective(src, dst, proj, dst.size());
If you can get hold of the 'Learning OpenCV' book, this is described around p. 170.
A final word of warning, since you're complaining about artifacts: yes, it will all look pretty cheesy. 'Real' 3D engines do a lot of work here: subpixel UV mapping, filtering, mipmapping, etc. If you want it to look nice, consider using the 'real' thing.
By the way, there's nice OpenGL support built into OpenCV.
To achieve what you are trying to do, you need to render the 3D model's UVs to a texture. It will be easier to learn to render 3D than to do things this way, especially since there are a lot of weaknesses in your approach: lighting will be difficult, and depth-buffer problems will be abundant.
Assuming all your objects should only ever be viewed from one angle, you need to render each of them to 3 textures:
UV-map
Normal-map
Depth-map (to correct the depth-buffer)
You will still have to do shading in order to make these look like your object, and I don't even know how to do the depth-buffer part; I just know it can be done.
So in order to avoid learning 3D, you will have to learn all the difficult parts of 3D rendering. That does not seem the easier route...
