I can't find a way to get the actual camera direction in either first-person or third-person mode: the vertical component of the data is missing; it just constantly shows zero. I've tried to get the camera data using PhysicsCast.instance.getRayFromCamera() and also via Camera.instance.rotation.
Neither of them works. I also found an older project, https://github.com/NoahWarnke/DCLBlocks, and under the new Decentraland version it doesn't work as it's supposed to either. Is there a way to get the actual camera direction or pointer direction in the new Decentraland version?
I have the player move around, and when he enters a new room (via instancing) his sprite shows him facing in the default direction (in my case, down). So if you enter a room from any other direction it looks weird, because for a short moment you can see the player facing down even if you came from the right. How can I tell Godot to set the player sprite to a specific frame in code, so I can set it to the proper frame for each direction? I'm new to Godot, and I used HeartBeast's Action RPG tutorial for my movement, so it's using an AnimationTree and AnimationPlayer. I tried "set_frame" but Godot just says it doesn't know the method.
If you are following the tutorial series I think you are following (Godot Action RPG), you are using an AnimationTree with an AnimationNodeBlendSpace2D (BlendSpace2D).
The BlendSpace2D picks an animation based on an input vector called "blend_position". This way you can use a BlendSpace2D to pick an animation based on the direction of motion or the direction the player character is looking in. For example, you can have "idle_up", "idle_down", "idle_left", and "idle_right" animations, and use the BlendSpace2D to pick one at runtime based on a direction vector.
Thus, you need to set the "blend_position" of the BlendSpace2D like this:
animationTree.set("parameters/NameOfTheBlendSpace2D/blend_position", vector)
Where:
animationTree is a variable set to the AnimationTree.
"NameOfTheBlendSpàce2D" is the name of the BlendSpace2D you want to set (e.g. "Idle").
vector is a Vector2D with the direction you want (e.g. Vector2.UP).
This is shown in episode 6 of the tutorial series ("Animation in all directions with an AnimationTree").
There is a reference project by HeartBeast at arpg-reference, which includes a function update_animation_blend_positions that looks like this:
func update_animation_blend_positions():
    # Keep every BlendSpace2D pointing in the current input direction
    animationTree.set("parameters/Idle/blend_position", input_vector)
    animationTree.set("parameters/Run/blend_position", input_vector)
    animationTree.set("parameters/Attack/blend_position", input_vector)
    animationTree.set("parameters/Roll/blend_position", input_vector)
Here "Idle", "Run", "Attack", and "Roll" are BlendSpace2D, each configured with animations for the corresponding actions, and this function updates them in sync so that they are picking the correct animation.
As far as I can tell the code from the repository is further refactored from what is show in the tutorial series. This code from the repository is under MIT licence.
I want my camera to follow my player, but when I get his position in another script the camera just falls out.
(Screenshots: the script, the script in the inspector, PlayerCameraPos)
Please help!
I only know that if I disable the script nothing happens: the camera rotates and stays in its position.
Your plan should work, as long as:
The CameraPos script is attached to your MainCamera. (Make sure!)
The "orientation" transform is set to the CameraPos object's transform. (It is.)
The CameraPos object is a child of the Player. (It is.)
Nothing else is interfering with your MainCamera's position. (Make sure!)
The code you've shown is fine. My guess is that if it's a code error, it's in a class you haven't shown us, especially considering you mentioned your camera was rotating, and the class you screenshotted only applies a position.
Rotation can be really tricky. If you tried the four things I suggested without luck, try sending over your rotation code and we'll see if the problem is there.
I'm testing the example https://github.com/petrbroz/forge-potree-demo, trying to import an NWD and a point cloud into the Forge Viewer, but I can't make their coordinates match.
I confirmed in Navisworks that both the point cloud and the model are aligned at 0,0,0, but they are displaced by more than 200' in the Viewer.
What we tried
The point cloud was converted using PotreeConverter 1.6 and 1.7 (https://github.com/potree/PotreeConverter/releases/tag/1.7).
The point cloud is scaled from meters to feet using let scale = new THREE.Vector3(3.28084,3.28084,3.28084); (applied as shown in the sketch after this list).
The models are being inserted with a global offset of 0,0,0: globalOffset: {x:0,y:0,z:0}.
We created a sphere at a known point directly in three.js, using coordinates in feet, and it matches the location in the NWD.
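For context, here is a minimal sketch of how we apply that scale (assuming pointcloud is the object returned by the Potree loader in the sample; the variable name is ours):
let scale = new THREE.Vector3(3.28084, 3.28084, 3.28084);
pointcloud.scale.copy(scale); // meters -> feet
pointcloud.updateMatrixWorld(true); // propagate the transform before comparing coordinates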
We found that PotreeConverter is not 100% compatible with the sample; a quick fix is needed on line 464 to read the attribute categories created by the converter:
var pointAttributeName = pointAttributes[i].name.toUpperCase().replaceAll(' ','_');
We also needed to add a new category around line 434:
PointAttribute.RGBA = new PointAttribute(PointAttributeNames.COLOR_PACKED, PointAttributeTypes.DATA_TYPE_INT8, 4);
Without those two fixes we can't load the point cloud at all.
What are we missing?
We can override the location by hand, but the final coordinates don't make sense with the project; they're too arbitrary. Could the two fixes be the problem? What would be an alternative?
We found the problem: Potree was using a bounding box based on absolute units taken from the actual point cloud bounding box.
This makes sense if you want the point cloud at the center of the scene, but not in our case.
So, if you override this behavior by changing the bounding box setup in potree.js from
boundingBox.min.sub(offset);
boundingBox.max.sub(offset);
tightBoundingBox.min.sub(offset);
tightBoundingBox.max.sub(offset);
to
tightBoundingBox.min.sub(offset);
tightBoundingBox.max.sub(offset);
boundingBox.copy(tightBoundingBox);
the point cloud will align with the model.
FYI: the fixes we made were actually what allowed us to run PotreeConverter 1.7 instead of the 1.6 specified in the demo readme.
Before the question: I searched many places and found some similar ideas, but not my solution. My question can also be described as: how do I recalculate a model's UVs to fit a panorama designed for a six-face skybox?
Recently, I came upon a unique way to get a fluent 3D roaming experience on Matterport's official site: https://matterport.com/gallery/
I just want to know how they did it. Their product is very fluent when switching panorama pictures.
After roaming many times, I found the secret. I realized that the panorama carrier they use is not a box or a sphere, but the objects they show first! The evidence is that when you switch viewpoints, objects such as chairs and tables have their own shadow (one chair gives two images: one standing up and the other lying on the floor).
With the objects in the panorama pasted onto their corresponding geometry, and with depth information, the switch while roaming becomes more fluent. (As for why they do not use the scanned objects directly, I think it is because of limited hardware: the many irregular faces you get from scanning equipment cannot be used directly.)
I want to use this idea in my project. I have a group of six panoramas that paste onto a BoxGeometry perfectly, and I just want to paste them onto the model instead. But I am stuck on projecting through all 360 degrees: I found how to project one direction, but I cannot project the remaining five.
var _p = houseObject.geometry.attributes; // the mesh's own BufferGeometry, not the BufferGeometry class
for (var i = 0; i < _p.position.count; i++) {
    // Move the vertex to world space, then project it to NDC through camera1.
    var uvtempbeforeconvert = new THREE.Vector3(_p.position.array[3*i], _p.position.array[3*i+1], _p.position.array[3*i+2])
        .applyMatrix4(houseObject.matrixWorld).project(camera1);
    // Keep only the vertices that land inside camera1's view frustum.
    if (uvtempbeforeconvert.x < 1 && uvtempbeforeconvert.x > -1 && uvtempbeforeconvert.y < 1 && uvtempbeforeconvert.y > -1) {
        VerticesArray1.push(_p.position.array[3*i], _p.position.array[3*i+1], _p.position.array[3*i+2]);
        uvArray1.push(uvtempbeforeconvert.x*0.5+0.5, uvtempbeforeconvert.y*0.5+0.5);
    }
}
Yes, I succeeded in calculating one direction, BUT I can't deal with the triangle faces that span two or more view frustums, like a face at the edge of the box.
How should I deal with this problem? Or have I been running in the wrong direction from the start? Which direction should I run in?
After asking many people, I found that I need to use a ShaderMaterial in three.js, with a cube texture and the GLSL samplerCube type. With that I can get the pixel color I need!
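For anyone else who lands here, a minimal sketch of that approach (assumptions: capturePoint is the world-space point the six panorama faces were captured from, and the six file names are placeholders):
var cubeTexture = new THREE.CubeTextureLoader().load(
    ['px.jpg', 'nx.jpg', 'py.jpg', 'ny.jpg', 'pz.jpg', 'nz.jpg']
);
var material = new THREE.ShaderMaterial({
    uniforms: {
        panorama: { value: cubeTexture },
        capturePoint: { value: new THREE.Vector3(0, 1.6, 0) } // assumed capture position
    },
    vertexShader: [
        'varying vec3 vWorldPosition;',
        'void main() {',
        '    vWorldPosition = (modelMatrix * vec4(position, 1.0)).xyz;',
        '    gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);',
        '}'
    ].join('\n'),
    fragmentShader: [
        'uniform samplerCube panorama;',
        'uniform vec3 capturePoint;',
        'varying vec3 vWorldPosition;',
        'void main() {',
        '    // Sample the cube map along the ray from the capture point to this fragment.',
        '    gl_FragColor = textureCube(panorama, normalize(vWorldPosition - capturePoint));',
        '}'
    ].join('\n')
});
Because the cube map is sampled by direction, faces that span several view frustums are no longer a special case; the per-vertex projection and UV bookkeeping disappear entirely.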
I am using r82.
I have a mesh with multiple materials. I can change their opacity just fine, but how they are rendered is what I would call "splotchy". I have been using ThreeJs for a while, and EDIT: was able to get the transparency working in a past version (r67) with the same model in a significantly more consistent way. So I was wondering if there is something that now I need to set that I didn't need to set before or if I am just overlooking something. Upon revisiting my older code and testing it again, I found that the same transparency issues were present. It was simply a matter of there not being as obvious "splotches" (and not testing enough, I'm sure). Here is a screenshot.
Here are a few more pictures I took that highlight the issue a bit better. In the model the outside walls are light grey and the floors dark grey, and I can toggle the outside walls' visibility. In these pictures one face of the outside wall is purple, and a face of the floor in the room on the other side of the wall is green.
Depending on the camera angle, part of the green floor face becomes invisible even though there is only one face between it and the camera.
The materials are all double-sided already, and there is no sign of this until transparency is turned on. I found a similar question that suggested changing mesh.setFaceCulling (or something similar), but that seemed to be from an older version and isn't in r82.
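For reference, these are the transparency-related knobs that do exist on a material in r82 (a sketch, not a guaranteed fix; the MultiMaterial lookup is an assumption about my setup, and depthWrite in particular trades one artifact for another):
var wallMaterial = mesh.material.materials[0]; // assumes a THREE.MultiMaterial on the mesh
wallMaterial.transparent = true;
wallMaterial.opacity = 0.5;
wallMaterial.side = THREE.DoubleSide;
wallMaterial.depthWrite = false; // transparent faces stop occluding each other in the depth buffer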
Thanks for any help in advance!
EDIT:
I started looking into the old version of three.js and the current version's source code to see what is done differently regarding transparency. I found transparentObjects, an array of the objects (I believe faces) that are going to be rendered, sorted based on "reversePainterSortStable". There is another list of objects (for the opaque materials, maybe?) called opaqueObjects that uses "painterSortStable". To see whether the sort order would change how things look when transparent, I changed it so that transparentObjects got sorted by "painterSortStable", and it did significantly change how things showed up (granted, it didn't fix my problem: it just removed some problem spots and created new ones).
So, the short version: it looks like an issue with the renderOrder of the faces.
That being said, I tried to find how the r67 version of the code handled the "renderOrder" of the faces, since it wasn't something that (to my knowledge) could be set in that version; it just handled it automatically. But I have had no luck tracking down how it was done so far.
So I see two possibilities: 1) find out how the past version did transparency correctly (at least for my purposes) and change the logic in the current version to match, or 2) find how to properly set the renderOrder of the faces based on the camera position in the scene (see the sketch just below). I will look into the second option first, but figured it would be good to document this for others looking to help answer, or who have a similar problem.
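As a sketch of option 2 (with the caveat that r82 exposes renderOrder per object, not per face, so this only helps once the building is split into multiple meshes):
function sortTransparentBackToFront(meshes, camera) {
    var camPos = camera.getWorldPosition();
    meshes.slice().sort(function (a, b) {
        return b.getWorldPosition().distanceToSquared(camPos) -
               a.getWorldPosition().distanceToSquared(camPos);
    }).forEach(function (mesh, i) {
        mesh.renderOrder = i; // lower renderOrder draws first, so farthest first
    });
}
// Call once per frame (or whenever the camera moves) before renderer.render(scene, camera).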
EDIT 2:
So, digging through the three.js source code, I noticed two things about the transparentObjects array I mentioned in the previous edit. The first: I cannot for the life of me figure out how it gets populated, since it doesn't seem to be added to anywhere in the code. The second: I think it is being populated with duplicates of the entire building object/mesh (see screenshot below).
The z indexes all seem to be the same, as do the ids, and the objects are all of type "mesh" (of the ones I looked through, granted, since there are a few thousand). So I was going to figure out why it adds what it adds to the array, but that is when I stumbled on the issue of not being able to find where in the code the transparentObjects array actually gets populated.
EDIT 3:
WestLangley, I tried setting depthTest to false on the outer wall material and got this (see snapshot below). Like I said in my response comment, even if it did work it wouldn't fix the issues experienced with the camera inside the building, but I wanted to follow up nonetheless.