What do the coordinates on the Sketchup Ruby API mean? - ruby

I'm very new to the SketchUp API and to programming in general, so sorry if this is a very basic question.
I clicked on a cuboid I drew and entered this code to get the coordinates of its bounding box:
model = Sketchup.active_model
model_bb = model.bounds
However, SketchUp returns this:
#<Geom::BoundingBox:0x0000005063c360>
What does this mean and how can I turn these into x,y,z coordinates that I can work with?
Thanks.

#<Geom::BoundingBox:0x0000005063c360>
What does this mean[?]
It's the Geom::BoundingBox object returned by model.bounds and assigned to model_bb. While you are working with Ruby via the console, it echoes back the last returned result.
how can I turn these into x,y,z coordinates that I can work with?
You can retrieve each of the 8 Point3d corners of the BoundingBox with its corner(corner_index) method, like so:
points = (0..7).map { |n| model_bb.corner(n) }
You can find out more information by reading the SketchUp Ruby API Documentation
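Each corner is a Geom::Point3d, so you can read its x, y and z directly (SketchUp's internal length unit is inches). A minimal sketch you could paste into the Ruby Console, assuming a model is open:
model    = Sketchup.active_model
model_bb = model.bounds
# Print the x, y, z of each of the 8 corners of the bounding box.
(0..7).each do |n|
  pt = model_bb.corner(n)   # Geom::Point3d
  puts "corner #{n}: x=#{pt.x} y=#{pt.y} z=#{pt.z}"
end
# The extreme corners are also available directly and convert to plain arrays.
puts model_bb.min.to_a.inspect
puts model_bb.max.to_a.inspect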

Related

Can't match coordinates of NWD and POTREE on Forge Viewer

I'm testing the example https://github.com/petrbroz/forge-potree-demo, trying to import an NWD and a point cloud into the Forge Viewer, but I can't make their coordinates match.
I confirmed in Navisworks that both the point cloud and the model are aligned at 0,0,0, but they are displaced by more than 200' in the Viewer.
What we tried
The point cloud was converted using PotreeConverter 1.6 and 1.7: https://github.com/potree/PotreeConverter/releases/tag/1.7
The point cloud is scaled from meters to feet using let scale = new THREE.Vector3(3.28084,3.28084,3.28084);.
The models are inserted with a global offset of 0,0,0: globalOffset: {x:0,y:0,z:0} (see the load sketch below).
We created a sphere at a known point directly in Three.js, using coordinates in feet, and it matches the location in the NWD.
We found that PotreeConverter is not 100% compatible with the sample; a quick fix is needed on line 464 to read the attribute categories created by the converter:
var pointAttributeName = pointAttributes[i].name.toUpperCase().replaceAll(' ','_');
We also needed to add a new category around line 434:
PointAttribute.RGBA = new PointAttribute(PointAttributeNames.COLOR_PACKED, PointAttributeTypes.DATA_TYPE_INT8, 4);
Without those two fixes we can't load the point cloud at all.
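For reference, this is roughly how we load the NWD with that offset (a sketch, not the demo's exact code; viewer and documentUrn stand in for our viewer instance and model URN):
Autodesk.Viewing.Document.load('urn:' + documentUrn, function (doc) {
    var viewable = doc.getRoot().getDefaultGeometry();
    viewer.loadDocumentNode(doc, viewable, {
        globalOffset: { x: 0, y: 0, z: 0 }   // keep the model at the shared origin
    });
}, function (err) { console.error(err); });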
What are we missing?
We can override the location by hand, but then the final coordinates don't make sense for the project; they are too arbitrary. Could the two fixes be the problem? What would be an alternative?
We found the problem: potree was using a bounding box based on absolute units taken from the actual point cloud bounding box.
This makes sense if you want the point cloud at the center of the scene, but not in our case.
So, if you override this behaviour by changing the bounding box setup in potree.js from
boundingBox.min.sub(offset);
boundingBox.max.sub(offset);
tightBoundingBox.min.sub(offset);
tightBoundingBox.max.sub(offset);
to
tightBoundingBox.min.sub(offset);
tightBoundingBox.max.sub(offset);
boundingBox.copy(tightBoundingBox);
the point cloud will align with the model.
FYI: the fixes we made were actually what allowed us to run PotreeConverter 1.7 instead of 1.6, which is what the demo readme specifies.

How to project (or paste) a panorama onto a model?

Before asking, I searched many places and found some similar ideas, but not my solution. My question can also be described as: how do I recalculate the model's UVs to fit a panorama designed for a six-face skybox?
Recently I came upon a unique way to get a fluid 3D roaming experience on Matterport's official gallery: https://matterport.com/gallery/
I just want to know how they did that. Their product is very smooth when switching between panorama pictures.
After roaming around many times I found the secret: the panorama carrier they use is not a box or a sphere, but the object they show first! The evidence is that when you switch viewpoints, objects such as chairs and tables leave their own ghost images (one chair appears twice: one standing up and one lying on the floor).
With the objects in the panorama pasted onto their corresponding mesh objects, and with depth information, the roaming transition becomes much smoother. (As for why they do not show the scanned objects directly: I think it is because of limited hardware; the many irregular faces produced by scanning equipment cannot be used directly.)
I want to use this idea in my project. I have a group of six panoramas that map onto a BoxGeometry perfectly, and I just want to paste them onto the model, but I am stuck projecting the full 360 degrees. I have found how to project one direction, but I cannot project the remaining five.
var _p = houseObject.geometry.attributes;   // the model's BufferGeometry attributes
for (var i = 0; i < _p.position.count; i++) {
    // world-space vertex projected into camera1's normalized device coordinates
    var uvTemp = new THREE.Vector3(_p.position.array[3 * i], _p.position.array[3 * i + 1], _p.position.array[3 * i + 2])
        .applyMatrix4(houseObject.matrixWorld).project(camera1);
    if (uvTemp.x < 1 && uvTemp.x > -1 && uvTemp.y < 1 && uvTemp.y > -1) {   // inside camera1's frustum
        VerticesArray1.push(_p.position.array[3 * i], _p.position.array[3 * i + 1], _p.position.array[3 * i + 2]);
        uvArray1.push(uvTemp.x * 0.5 + 0.5, uvTemp.y * 0.5 + 0.5);   // NDC [-1,1] -> UV [0,1]
    }
}
Yes, I succeeded in calculating one direction, BUT I can't deal with the triangle faces that span two or more view frustums, like a face at an edge of the box.
How should I deal with this problem? Or was I heading in the wrong direction from the start? Which direction should I take?
After asking many people, I found that I need to use a ShaderMaterial in Three.js with a cube texture, sampled in GLSL with a samplerCube. With that I can get the pixel color I need!
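For anyone trying the same thing, here is a rough sketch of that approach, assuming six face images named px/nx/py/ny/pz/nz.jpg and a THREE.Vector3 panoCenter marking where the panorama was captured (these names are mine, not from any library):
var cubeTex = new THREE.CubeTextureLoader().load(
    ['px.jpg', 'nx.jpg', 'py.jpg', 'ny.jpg', 'pz.jpg', 'nz.jpg']);
var material = new THREE.ShaderMaterial({
    uniforms: { panoCube: { value: cubeTex }, panoCenter: { value: panoCenter } },
    vertexShader: [
        'varying vec3 vWorldPos;',
        'void main() {',
        '    vWorldPos = (modelMatrix * vec4(position, 1.0)).xyz;',
        '    gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);',
        '}'
    ].join('\n'),
    fragmentShader: [
        'uniform samplerCube panoCube;',
        'uniform vec3 panoCenter;',
        'varying vec3 vWorldPos;',
        'void main() {',
        '    vec3 dir = normalize(vWorldPos - panoCenter);   // direction from the panorama center',
        '    gl_FragColor = textureCube(panoCube, dir);',
        '}'
    ].join('\n')
});
houseObject.material = material;   // every fragment now samples the cube map by direction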

Paper.js, force Cartesian coordinate system with origin at center

Using paper.js and loving it. I want to use the Cartesian coordinate system, though, as opposed to the "document"-style system that is the default.
When I draw on a canvas with plain js (without paper.js), I enforce the transformation like this (on $(document).ready()):
ctx = canvas.getContext("2d");
ctx.scale(1, -1);
ctx.translate(0, (-canvas.height/2));
But with the way paper.js runs in the browser, this doesn't work.
The view object in Paper has a transform() method, yet the following code throws an error, even though the view object is clearly available, as the console output shows:
console.log(view);
view.transform( new Matrix(1,0,0,-1,300,300) );
// console output:
// CanvasView {_context: CanvasRenderingContext2D, _eventCou......
// Uncaught TypeError: view.transform is not a function
I'm thinking I need to call transform on some document-load event? But I don't see any straightforward event like that in Paper's API.
Is there a simple way to apply a linear transformation to the whole view once, at the beginning, to get a Cartesian system?
Figured out the following works:
project.activeLayer.transform( new Matrix(1,0,0,-1,view.center.x, view.center.y) );
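For example, a minimal PaperScript sketch (with the usual canvas setup): draw in Cartesian coordinates first, then flip the finished layer once at the end.
var axisX = new Path.Line(new Point(0, 0), new Point(100, 0));
axisX.strokeColor = 'red';
var axisY = new Path.Line(new Point(0, 0), new Point(0, 100));   // "up" in Cartesian terms
axisY.strokeColor = 'blue';
// Flip y and move the origin to the view center; the existing items are remapped.
project.activeLayer.transform(new Matrix(1, 0, 0, -1, view.center.x, view.center.y));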

Sketchup Ruby InputPoint.face method on Image object?

I have a problem with the InputPoint.face method in the SketchUp Ruby API.
I import an image object and then draw a 5-edge polygon on top of this image. I used InputPoint.face to get the number of edges of the polygon after clicking on it. I expect the output to be 5, but it is actually 4.
If I remove the image, the result is 5.
I don't understand why the output is like that. What can I do to get 5?
This is my code:
# The onLButtonDown method is called when the user presses the left mouse button.
def onLButtonDown(flags, x, y, view)
  ip = view.inputpoint(x, y)
  f = ip.face
  aEdges = f.edges
  puts aEdges.length
end
Thank you.
So you have drawn pentagon faces on the Image entity you imported? And when you use InputPoint to click on one of the pentagons you get a face with four edges?
What happens here is probably that you are getting the Face inside the Image entity. Under the hood an Image entity is a special component instance. You can in fact find the definition for the Image entity in model.definitions.
For more details on SketchUp and Components, Groups and Images read this article: http://www.thomthom.net/thoughts/2012/02/definitions-and-instances-in-sketchup/
InputPoint and PickHelper seem to let you pick the face inside the Image entity instead of stopping when it hits the Image entity.
You will probably want to filter your results before using them, and you probably want to use the PickHelper class instead of InputPoint.
http://www.sketchup.com/intl/en/developer/docs/ourdoc/pickhelper.php
http://www.thomthom.net/thoughts/2013/01/pickhelper-a-visual-guide/
InputPoint is more for getting 3d points for inference, while PickHelper is best to use to pick and select entities.
You can check the Face entity you get from InputPoint and its parent to verify which face it is and what context it belongs to.
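For example, a minimal sketch of that check inside a tool's onLButtonDown (this is an illustration, not the original code):
def onLButtonDown(flags, x, y, view)
  ph = view.pick_helper
  ph.do_pick(x, y)
  face = ph.picked_face
  return if face.nil?
  # face.parent is the ComponentDefinition the face lives in; a face drawn on an
  # imported picture belongs to the Image's definition, not to the model itself.
  if face.parent.respond_to?(:image?) && face.parent.image?
    puts "Picked the face inside an Image entity - ignoring it."
  else
    puts "Face has #{face.edges.length} edges."
  end
end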

How to generate texture mapping images?

I want to wrap images onto 3D objects. To keep things simple and fast, instead of using (and learning) a 3D library I want to use mapping images. Mapping images are used like this:
So you generate the mapping images once for each object and use the same mapping for all images you want to wrap.
My question is: how can I generate such mapping images, given the 3D model? Since I don't know the terminology, my searches failed me. Sorry if I am using the wrong jargon.
Below you can see a description of the workflow.
I have the 3D model of the object and the input image, i want to generate mapping images that I can use to generate the textured image.
I don't even know where to start, any pointers are appreciated.
More info
My initial idea was to somehow warp an identity mapping (see below) using an external program. I generated horizontal and vertical gradient images in Photoshop just to see if the mapping works with Photoshop-generated images. The result doesn't look good. I wasn't hopeful, but it was worth a shot.
input
mappings (x and y), they just resize the image, they don't do anything fancy.
result
As you can see there are lots of artifacts. Custom mapping images I generated by warping the gradients look even worse.
Here is some more information on mappings: http://www.imagemagick.org/Usage/mapping/#distortion_maps
I am using OpenCV's remap() function for the mapping.
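For completeness, this is roughly how I feed the mapping images to remap() (a sketch; the file names are placeholders and the maps are assumed to hold normalized 0-255 coordinates):
cv::Mat input = cv::imread("input.png");
cv::Mat mx8   = cv::imread("map_x.png", cv::IMREAD_GRAYSCALE);
cv::Mat my8   = cv::imread("map_y.png", cv::IMREAD_GRAYSCALE);
cv::Mat map_x, map_y;
mx8.convertTo(map_x, CV_32FC1, (input.cols - 1) / 255.0);   // 0-255 -> 0..width-1
my8.convertTo(map_y, CV_32FC1, (input.rows - 1) / 255.0);   // 0-255 -> 0..height-1
cv::Mat result;
cv::remap(input, result, map_x, map_y, cv::INTER_LINEAR);
cv::imwrite("result.png", result);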
If I understand you right, you want to do all of it in 2D?
Calling warpPerspective() for each of your cube surfaces will be much more successful than using remap().
pseudocode outline:
// for each surface:
// get the desired src and dst polygon
// the src one is your texture-image, so that's:
vector<Point2f> p_src(4), p_dst(4);
p_src[0] = Point2f(0,0);
p_src[1] = Point2f(0,src.rows-1);
p_src[2] = Point2f(src.cols-1,0);
p_src[3] = Point2f(src.cols-1,src.rows-1);
// the dst poly is the one you want textured, a 3d->2d projection of the cube surface.
// sorry, you've got to do that on your own ;(
// let's say, you've come up with this for the cube - top:
p_dst[0] = Point2f(15,15);
p_dst[1] = Point2f(44,19);
p_dst[2] = Point2f(56,30);
p_dst[3] = Point2f(33,44);
// now you need the projection matrix to transform from one to another
// (getPerspectiveTransform expects floating-point points, hence Point2f above):
Mat proj = getPerspectiveTransform( p_src, p_dst );
// finally, you can warp your texture onto the dst-polygon
// (dst must already be allocated to the size of your output canvas):
warpPerspective(src, dst, proj, dst.size());
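If you do know a camera pose and intrinsics for the view you are texturing, one possible way to get that dst polygon (not part of the outline above, just a suggestion) is to project the four 3D corners of each cube face with projectPoints:
// corners3d: the four 3D corners of one cube face, in the same order as p_src
// rvec, tvec: camera pose; K: 3x3 camera intrinsics
vector<Point2f> projectFaceCorners(const vector<Point3f>& corners3d,
                                   const Mat& rvec, const Mat& tvec, const Mat& K)
{
    vector<Point2f> corners2d;
    projectPoints(corners3d, rvec, tvec, K, Mat(), corners2d);   // empty distortion coeffs
    return corners2d;   // use these as p_dst for getPerspectiveTransform()
}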
If you can get hold of the 'Learning OpenCV' book, this is described around p. 170.
A final word of warning, since you're complaining about artifacts: yes, it will all look pretty cheesy. 'Real' 3D engines do a lot of work here (sub-pixel UV mapping, filtering, mipmapping, etc.), so if you want it to look nice, consider using the real thing.
By the way, there is nice OpenGL support built into OpenCV.
To achieve what you are trying to do, you need to render the 3D model's UVs to a texture. It will be easier to learn to render 3D than to do things this way, especially since there are a lot of weaknesses in your approach: lighting is difficult, and problems with the depth buffer will be abundant.
Assuming all your objects will only ever be viewed from one angle, you need to render each of them to 3 textures:
UV-map
Normal-map
Depth-map (to correct the depth-buffer)
You will still have to do shading in order to make these look like your object, and I don't even know how to do the depth-buffer part; I just know it can be done.
So in order to avoid learning 3D, you will have to learn all the difficult parts of 3D rendering. That does not seem like the easier route...
