How to project (or paste) a panorama onto a model? - three.js

Before asking, I searched many places and found some similar ideas, but no solution to my problem. My question could also be phrased as: how do I recalculate a model's UVs to fit a panorama designed for a six-face skybox?
Recently, I came across a unique way to get a smooth 3D roaming experience on Matterport's official gallery: https://matterport.com/gallery/
I just want to know how they did it. Their product switches between panorama pictures very smoothly.
After roaming around many times, I found the secret. The panorama carrier they use is not a box or a sphere, but the object mesh they show first! The evidence is that when you switch viewpoints, objects such as chairs and tables cast their own shadows (one chair has two images: one standing up and one lying on the floor).
With each object in the panorama pasted onto its own corresponding mesh, and with depth information, switching viewpoints becomes much smoother. (As for why they do not use the scanned objects directly, I think it is because of hardware limits: the many irregular faces produced by scanning equipment cannot be used directly.)
I want to use this idea in my project. I have a group of six panoramas that map onto a BoxGeometry perfectly, and I want to paste them onto the model instead. But I am stuck on projecting all 360 degrees: I found out how to project one direction, but I cannot project the remaining five.
var _p = houseObject.geometry.attributes; // assuming the mesh uses a BufferGeometry
for (var i = 0; i < _p.position.count; i++) {
    // Local vertex -> world space -> camera1's normalized device coordinates.
    var ndc = new THREE.Vector3(_p.position.array[3 * i], _p.position.array[3 * i + 1], _p.position.array[3 * i + 2])
        .applyMatrix4(houseObject.matrixWorld).project(camera1);
    // Keep vertices inside this camera's frustum; map NDC [-1, 1] to UV [0, 1].
    if (ndc.x < 1 && ndc.x > -1 && ndc.y < 1 && ndc.y > -1) {
        VerticesArray1.push(_p.position.array[3 * i], _p.position.array[3 * i + 1], _p.position.array[3 * i + 2]);
        uvArray1.push(ndc.x * 0.5 + 0.5, ndc.y * 0.5 + 0.5);
    }
}
I succeeded in calculating one direction, but I cannot deal with triangle faces that span two or more view frustums, like a face at the edge of the box.
How should I deal with this problem? Or was I heading in the wrong direction from the start? Which direction should I take?

After asking many people, I found that I need to use a ShaderMaterial in three.js together with a cube texture, sampled in the fragment shader through a samplerCube uniform (textureCube). With that I can get exactly the pixel color I need!
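For reference, here is a minimal sketch of that approach. The six file names, the capturePoint uniform, and its value are my assumptions (capturePoint should be the world position where the six photos were shot, and the images should follow the +x/-x/+y/-y/+z/-z cube map order):

var panoramaTexture = new THREE.CubeTextureLoader().load([
    'px.jpg', 'nx.jpg', 'py.jpg', 'ny.jpg', 'pz.jpg', 'nz.jpg'
]);
var material = new THREE.ShaderMaterial({
    uniforms: {
        panorama: { value: panoramaTexture },
        capturePoint: { value: new THREE.Vector3(0, 1.6, 0) } // where the photos were taken
    },
    vertexShader: [
        'varying vec3 vWorldPosition;',
        'void main() {',
        '    // Pass the world-space position of each vertex to the fragment shader.',
        '    vWorldPosition = (modelMatrix * vec4(position, 1.0)).xyz;',
        '    gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);',
        '}'
    ].join('\n'),
    fragmentShader: [
        'uniform samplerCube panorama;',
        'uniform vec3 capturePoint;',
        'varying vec3 vWorldPosition;',
        'void main() {',
        '    // Sample the panorama along the direction from the capture point to this fragment.',
        '    vec3 dir = normalize(vWorldPosition - capturePoint);',
        '    gl_FragColor = textureCube(panorama, dir);',
        '}'
    ].join('\n')
});
houseObject.material = material;

This sidesteps the per-frustum UV problem entirely: every fragment looks up the cube map along its own direction, so faces that span several frustums need no special handling.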

Related

How to set a Sprite to a specific Frame in Godot

I have the player move around, and when he enters a new room (via instancing) his sprite shows him facing in the default direction (in my case, down). So if you enter a room from any other direction it looks weird, because for a short moment you can see the player facing down even if you came from the right. How can I tell Godot to set the player sprite to a specific frame in code, so I can set it to the proper frame for each direction? I'm new to Godot and I used HeartBeast's Action RPG tutorial for my movement, so it's using an AnimationTree and AnimationPlayer. I tried "set_frame" but Godot just says it doesn't know the method.
If you are following the tutorial series I think you are following (Godot Action RPG)… You are using an AnimationTree with AnimationNodeBlendSpace2D (BlendSpace2D).
The BlendSpace2D picks an animation based on an input vector "blend_position". This way you can use a BlendSpace2D to pick an animation based on the direction of motion or the direction the player character is looking at. For example, you can have "idle_up", "idle_down", "idle_left", and "idle_right" animations, and use the BlendSpace2D to pick one at runtime based on a direction vector.
Thus, you need to set the "blend_position" of the BlendSpace2D like this:
animationTree.set("parameters/NameOfTheBlendSpàce2D/blend_position", vector)
Where:
animationTree is a variable set to the AnimationTree.
"NameOfTheBlendSpàce2D" is the name of the BlendSpace2D you want to set (e.g. "Idle").
vector is a Vector2D with the direction you want (e.g. Vector2.UP).
This is shown in the episode 6 of the tutorial series (Animation in all directions with an AnimationTree).
There is a reference project by HeartBeast at arpg-reference, which contains a function update_animation_blend_positions that looks like this:
func update_animation_blend_positions():
    animationTree.set("parameters/Idle/blend_position", input_vector)
    animationTree.set("parameters/Run/blend_position", input_vector)
    animationTree.set("parameters/Attack/blend_position", input_vector)
    animationTree.set("parameters/Roll/blend_position", input_vector)
Here "Idle", "Run", "Attack", and "Roll" are BlendSpace2D, each configured with animations for the corresponding actions, and this function updates them in sync so that they are picking the correct animation.
As far as I can tell the code from the repository is further refactored from what is show in the tutorial series. This code from the repository is under MIT licence.
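For completeness, a minimal sketch of where such a function is typically called, assuming Godot 3.x, the default ui_* input actions, and the animationTree variable from above (the surrounding names are illustrative, not taken from the repository):

func _physics_process(delta):
    var input_vector = Vector2(
        Input.get_action_strength("ui_right") - Input.get_action_strength("ui_left"),
        Input.get_action_strength("ui_down") - Input.get_action_strength("ui_up")
    )
    # Only update the blend positions while there is input, so the idle
    # animation keeps facing the last direction of movement.
    if input_vector != Vector2.ZERO:
        update_animation_blend_positions()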

argon-aframe: move with the user's geolocation

I have this project:
my codepen
I want to be able to move forward when the user walks, so it feels like they are walking through the floor plan in VR just as they are in real life.
My goal is to get the user's geolocation, show them the room matching their location, and have them walk around the room while viewing the AR on their phone, where they would see paintings on the walls.
My challenges are:
walk in real life and move in VR (right now I have it auto-walking forward in the meantime)
AFRAME.registerComponent("automove-controls", {
    init: function () {
        this.speed = 0.1;
        this.isMoving = true;
        this.velocityDelta = new THREE.Vector3();
    },
    isVelocityActive: function () {
        return this.isMoving;
    },
    getVelocityDelta: function () {
        // Move forward (negative z) while isMoving is true.
        this.velocityDelta.z = this.isMoving ? -this.speed : 0;
        return this.velocityDelta.clone();
    }
});
capture the user's geolocation, so the moment they open the site they are placed on the floor plan relative to their real-world location
This is my first attempt, so any feedback would be appreciated.
As far as I know, argon.js is more about geopositioning than spatial/marker-based augmented reality.
Moreover, it's quite worrying that their repo for aframe hasn't been touched for a while.
Argon seems like a library for creating scenes at certain points around the user; even their examples are based on positioning things nearby. The reason is that GPS and phone accelerometers are far too inaccurate to provide useful data for spatial positioning. That's why the VIVE needs two base stations, and other devices need at least a camera/IR sensor, to get information about the HMD's position.
Positioning the person at a point depending on where they are in a room is quite a difficult task: you would need a point of reference and position the user accordingly. Doing that from geolocation alone seems impossible, since the user can be anywhere in the world.
I would try to do this using jerome-etienne's marker-based AR.js. The markers would be the points of reference you need, and although image processing seems like a difficult task, AR.js is surprisingly stable with multiple markers, which helps in creating complex scenes.
The markers seem like a good idea, since they can help you with positioning; moreover, simple scenes have no problem achieving 60+ fps, which makes the experience quite comfortable.
I would start there, since AR.js seems to be updated frequently.
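For illustration, a marker-based AR.js scene can be as small as this sketch (assuming the public A-Frame and AR.js builds and the built-in hiro marker preset; the box stands in for your room content):

<script src="https://aframe.io/releases/0.8.0/aframe.min.js"></script>
<script src="https://jeromeetienne.github.io/AR.js/aframe/build/aframe-ar.js"></script>
<a-scene embedded arjs>
  <!-- The printed hiro marker is the physical point of reference. -->
  <a-marker preset="hiro">
    <a-box position="0 0.5 0" material="color: red;"></a-box>
  </a-marker>
  <a-entity camera></a-entity>
</a-scene>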

sf::View visibility Check

OK, so I've been working on a game with some friends for a school project. We are using SFML.
Lately I've been working on the camera and a tile system, and now I need them to work together.
The camera works like this:
I have two different sf::View instances, one for the game world and one for the HUD. The one for the game world follows the player along the x-axis.
The tile system reads in a txt file and draws sprites based on the information in that file.
As it is now, I always draw all the tiles, even those outside the camera's view. Not good. I need a way to check whether a tile is outside the camera's view before I draw it. How do I do this?
I did find this:
Get X and Y offset of sf::View
but I can't really wrap my head around how to make this info work in my game.
Any help would be really appreciated! :)
Best regards, Elis
Managed to figure it out after finding this:
http://fr.sfml-dev.org/forums/index.php?topic=10590.0
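In short, the idea from that thread is to rebuild the view's world-space rectangle each frame and only draw tiles that intersect it. A minimal sketch, assuming SFML 2.x and that each tile is an sf::Sprite in a container named tiles (my name):

// World-space rectangle currently covered by the game view.
sf::FloatRect viewBounds(view.getCenter() - view.getSize() / 2.f, view.getSize());
for (const sf::Sprite& tile : tiles) {
    // Skip tiles whose global bounds do not overlap the visible area.
    if (viewBounds.intersects(tile.getGlobalBounds()))
        window.draw(tile);
}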

3DS Max - How to display bones as floating objects instead of lines beneath a mesh?

So I downloaded a rigged model for 3DS Max and it had something I'd never seen before: bones shown as globes and rings floating outside the mesh, for easy access and convenience, instead of having to constantly dig through the object hierarchy or switch between layers to make sure everything is animating properly. How do I set up a model like this, or convert a model rigged with regular lines between points as bones into a model like this?
Rigged model with bone "helpers"?
I see you are new here, but this website is for purely programming-related questions. How to use a particular piece of software is not the focus of this particular website.
I would suggest you take 3DS Max questions to cgtalk.com, or perhaps to Autodesk's own support website. Hmm... it looks like their old discussion forum site, www.the-area.com, doesn't work anymore. But I found this link: http://forums.autodesk.com/t5/3ds-max/ct-p/area-c1

Sketchup Ruby InputPoint.face method on Image object?

I have a problem with the InputPoint.face method in the SketchUp Ruby API.
I import an image object and then draw a 5-edge polygon on top of this image. I use InputPoint.face to get the number of edges of the polygon after clicking on it. I expect the output to be 5, but it is actually 4.
If I remove the image, the result is 5.
I don't understand why the output is like that. What can I do to get 5?
This is my code:
# The onLButtonDown method is called when the user presses the left mouse button.
def onLButtonDown(flags, x, y, view)
  @ip = view.inputpoint x, y
  @f = @ip.face
  aEdges = @f.edges
  puts aEdges.length
end
Thank you.
So you have drawn pentagon faces on the Image entity you imported? And when you use InputPoint to click on one of the pentagons you get a face with four edges?
What happens here is probably that you are getting the Face inside the Image entity. Under the hood an Image entity is a special component instance. You can in fact find the definition for the Image entity in model.definitions.
For more details on SketchUp and Components, Groups and Images read this article: http://www.thomthom.net/thoughts/2012/02/definitions-and-instances-in-sketchup/
InputPoint and PickHelper seem to let you pick the face inside the Image entity instead of stopping when it hits the Image entity.
You will probably want to filter out your results before using them, and you probably want to use the PickHelper class instead of InputPoint.
http://www.sketchup.com/intl/en/developer/docs/ourdoc/pickhelper.php
http://www.thomthom.net/thoughts/2013/01/pickhelper-a-visual-guide/
InputPoint is more for getting 3d points for inference, while PickHelper is best to use to pick and select entities.
You can check the Face entity you get from InputPoint and its parent to verify which face it is and what context it belongs to.
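A minimal sketch of that filtering with PickHelper, assuming it runs inside a Tool class (the skip-the-Image check is my suggestion, not code from the question):

def onLButtonDown(flags, x, y, view)
  ph = view.pick_helper
  ph.do_pick(x, y)
  face = ph.picked_face
  return unless face
  parent = face.parent
  # Ignore the face that lives inside an Image entity's internal definition.
  return if parent.is_a?(Sketchup::ComponentDefinition) && parent.image?
  puts face.edges.length
end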
