I have a question: how can I get the coordinates (x, y, z) of an object (component) in the Ruby console? I need these coordinates so I can send them to another object. Thanks.
"A coordinate" is a little bit ambiguous, depending if you want a point from the bounding box, the insertion point or a vertex inside the component.
But a simple generic example would be:
# Assuming user has selected a ComponentInstance:
instance = Sketchup.active_model.selection[0]
puts instance.transformation.origin
ComponentInstance.transformation (SketchUp 6.0+)
The transformation method is used to retrieve the transformation of this instance.
http://www.sketchup.com/intl/en/developer/docs/ourdoc/componentinstance.php#transformation
Transformation.origin (SketchUp 6.0+)
The origin method retrieves the origin of a rigid transformation.
http://www.sketchup.com/intl/en/developer/docs/ourdoc/transformation.php#origin
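If you need one of the other interpretations mentioned above instead, here is a small sketch (assuming a ComponentInstance is selected as in the example; the vertex lookup is only an illustration):
instance = Sketchup.active_model.selection[0]
# Insertion point: the origin of the instance's transformation
puts instance.transformation.origin
# Center of the instance's bounding box
puts instance.bounds.center
# World position of the first edge's start vertex inside the definition
edge = instance.definition.entities.grep(Sketchup::Edge).first
puts edge.start.position.transform(instance.transformation) if edge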
I am trying to create a PPP in spatstat using my study area (a large polygon made up of individual polygons) from a shapefile exported from GIS.
I have been following: Handling shapefiles in the spatstat package
Adrian Baddeley, Rolf Turner and Ege Rubak
2022-11-08
spatstat version 3.0-2
#load packages
install.packages("spatstat")
library(spatstat)
install.packages("maptools")
library(maptools) # will get a warning message suggesting rgdal instead; stick with maptools
install.packages("sp")
library(sp)
#import shapefile
UMshape1<-readShapeSpatial('F:/GIS/export_shape/Clipped_urban_matrix.shp')
#check class
class(UMshape1)
#returned: [1] "SpatialPolygonsDataFrame"
#following the code from the guidance to convert objects of class SpatialPolygonsDataFrame
UM1 <- as(UMshape1, "SpatialPolygons")
UM1regions <- slot(UM1, "polygons")
UM1regions <- lapply(UM1regions, function(x) { SpatialPolygons(list(x)) })
UM1windows <- lapply(UM1regions, as.owin)
#checked the class of each of these objects
class(UM1)
#"SpatialPolygons"
class(UM1regions)
#"list"
class(UM1windows)
"list"
#from guidance 'The result is a list of objects of class owin. Often it would make sense to convert this to a tessellation object, by typing':
#so I enter the code for my data
teUM1 <- tess(tiles = UM1windows)
This last command (tess) has now been running for 48 hours (red stop box). I did not create a progress bar.
Is this the right thing to do so that I can then create my owin study area, and then create a PPP in spatstat?
If the desired result is a single window of class owin to use as the window for your point pattern, then you don't need a tessellation. Instead of teUM1 <- tess(tiles = UM1windows) you should probably do teWin <- union.owin(as.solist(UM1windows)).
If you do really need a tessellation (which would keep each of the windows separate for further use) then you could call tess(tiles=UM1windows, check=FALSE). The long computation time is caused by the fact that the code is checking whether each window overlaps any of the other windows. This check is disabled if you set check=FALSE.
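A minimal sketch of both options, assuming the UM1windows list built above:
# Option 1: merge everything into a single owin to use as the study area
UMwin <- union.owin(as.solist(UM1windows))
# Option 2: keep the windows separate as a tessellation, skipping the
# expensive pairwise overlap check that causes the long run time
teUM1 <- tess(tiles = UM1windows, check = FALSE)
# The merged window can then serve as the observation window of a point
# pattern, e.g. X <- ppp(x, y, window = UMwin) for event coordinates x, y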
I am using Gensim's Doc2Vec, and was wondering if there is a way to get the most similar document to another document that is outside the list of TaggedDocuments used to train the Doc2Vec model.
Right now I can infer a vector from a document not in the training set:
# 'model' here is an instance of the Doc2Vec class that has been trained
from nltk.tokenize import word_tokenize

# Inferring a vector for a document not in the training set
doc_not_in_training_set = "Foo Foo Foo Foo Foo Foo Fie"
v1 = model.infer_vector(word_tokenize(doc_not_in_training_set.lower()))
print("V1_infer", v1)
This prints out a vector representation of the doc_not_in_training_set string. However, is there a way to use this vector to find the n most similar documents to the doc_not_in_training_set string among the TaggedDocuments used to train this Doc2Vec model?
Looking through the documentation, the closest I could find was the model.docvecs.most_similar() method:
# Finding most similar to first
similar_doc = model.docvecs.most_similar('0')
This returns the document in the training set most similar to the document in the training set with tag '0'.
In the documentation of this method, it looks like there is not yet the functionality I am looking for:
TODO: Accept vectors of out-of-training-set docs, as if from inference.
Is there another method I can use to find documents similar to a document not in the training set?
The .most_similar() method will also take a raw vector as the target position.
It helps to explicitly name the positive parameter: the method tries to intuit what strings and other arguments mean, and naming the parameter prevents a single raw vector from being misinterpreted.
So try:
similar_docs = model.docvecs.most_similar(positive=[v1])
You should get back a list of nearest-neighbors to the v1 vector that you'd previously inferred.
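Putting it together, a minimal sketch under the same assumptions (a trained model and the v1 inferred above); topn controls how many neighbours most_similar() returns:
similar_docs = model.docvecs.most_similar(positive=[v1], topn=10)
for tag, similarity in similar_docs:
    # each entry pairs a training-document tag with its cosine similarity
    print(tag, similarity)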
I'm trying to move an object called "car" via dat.gui. If the user changes the x value using the dat.gui slider, the car should move along the x-axis of its local coordinate system.
Here I have copied the part of the code that is causing me problems:
var m = new THREE.Vector3;
m.copy(car.position);
if (changed.key=='X') car.translateX(changed.value-car.worldToLocal(m).x);
My problem is that the expression passed to car.translateX always evaluates to exactly the value in changed.value; the part after the minus sign has no effect, or is permanently 0. I have printed the values with console.log, and car.position.x and m change in each step, but the subtraction still yields nothing more than changed.value every time. Can someone help me and tell me why this happens?
Unfortunately, I am absolutely stuck.
car.worldToLocal(m)
I'm afraid this piece of code makes no sense since car.position (and thus m) already represents the car's position in local space.
Instead of using translateX(), you can achieve the same result by modifying car.position directly:
car.position.x = changed.value;
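For completeness, a minimal sketch of the dat.gui wiring for this approach (the params object and slider range are illustrative assumptions):
var params = { X: car.position.x };
var gui = new dat.GUI();
gui.add(params, 'X', -10, 10).onChange(function (value) {
  // assign the slider value directly instead of translating by a delta
  car.position.x = value;
});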
I am working with some data that specifies an installation path; in another data source I have the locations of events given by their lat/long coordinates.
The installation location contained in the oracle attribute SDO_ORDINATE_ARRAY does not match any X/Y geographic coordinate system I am familiar with (Lat/Long or UTM). Is there a way to figure out what the data type is that is stored in the SDO_ORDINATE_ARRAY?
Here is an example of the data for a path with 3 (x,y) points:
MDSYS.SDO_GEOMETRY(2002,1026911,NULL,
MDSYS.SDO_ELEM_INFO_ARRAY(1,2,1),
MDSYS.SDO_ORDINATE_ARRAY(
1352633.64991299994289875030517578125,
12347411.6615570001304149627685546875,
1352638.02988700009882450103759765625,
12347479.02890899963676929473876953125,
1352904.06293900008313357830047607421875,
12347470.76137300021946430206298828125
))
The above should be roughly in the vicinity of 33.9845° N, 117.5159° W; I went through various conversions but could not find anything that led me anywhere close to the above.
I read through the documentation on SDO_GEOMETRY from the Oracle page and did not find any help in figuring out what the data type is.
https://docs.oracle.com/database/121/SPATL/sdo_geometry-object-type.htm#SPATL494
Alternatively, if there is a way I can type in the lat/long somewhere to see all of the different coordinate types which are equivalent, I might also be able to figure out which format this is.
Looks like there is a typo inside MDSYS.SDO_GEOMETRY(2002,1026911,NULL,.
1026911 is supposed to be an SRS (Spatial Reference System) identifier.
If we remove the first 1 we get 102691, and that is a very well-known SRS code:
ESRI:102691, NAD 1983 StatePlane Minnesota North FIPS 2201 Feet.
The corresponding WKT gives you all the necessary information to perform any coordinate conversion:
PROJCS["NAD_1983_StatePlane_Minnesota_North_FIPS_2201_Feet",
GEOGCS["GCS_North_American_1983",
DATUM["North_American_Datum_1983",
SPHEROID["GRS_1980",6378137,298.257222101]],
PRIMEM["Greenwich",0],
UNIT["Degree",0.017453292519943295]],
PROJECTION["Lambert_Conformal_Conic_2SP"],
PARAMETER["False_Easting",2624666.666666666],
PARAMETER["False_Northing",328083.3333333333],
PARAMETER["Central_Meridian",-93.09999999999999],
PARAMETER["Standard_Parallel_1",47.03333333333333],
PARAMETER["Standard_Parallel_2",48.63333333333333],
PARAMETER["Latitude_Of_Origin",46.5],
UNIT["Foot_US",0.30480060960121924],
AUTHORITY["EPSG","102691"]]
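With the SRID corrected, one way to verify this (a sketch, assuming SRID 102691 is registered in your database's MDSYS.CS_SRS) is to let Oracle Spatial reproject the geometry to lat/long (WGS84, SRID 4326) with SDO_CS.TRANSFORM:
SELECT SDO_CS.TRANSFORM(
         MDSYS.SDO_GEOMETRY(2002, 102691, NULL,
           MDSYS.SDO_ELEM_INFO_ARRAY(1, 2, 1),
           MDSYS.SDO_ORDINATE_ARRAY(
             1352633.64991299994289875030517578125,
             12347411.6615570001304149627685546875,
             1352638.02988700009882450103759765625,
             12347479.02890899963676929473876953125,
             1352904.06293900008313357830047607421875,
             12347470.76137300021946430206298828125)),
         4326) AS wgs84_path
FROM DUAL;
-- the returned geometry's SDO_ORDINATE_ARRAY holds lon/lat pairs in degrees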
I am confused by the parameters of the functions related to coordinate systems, for example:
TangoSupport_getMatrixTransformAtTime(double timestamp,
TangoCoordinateFrameType base_frame,
TangoCoordinateFrameType target_frame,
TangoSupportEngineType base_engine,
TangoSupportEngineType target_engine,
TangoSupportDisplayRotation display_rotation_type,
TangoMatrixTransformData *matrix_transform)
(1) base_engine: If I choose COORDINATE_FRAME_START_OF_SERVICE as base_frame then, as described in the documentation, the coordinate system will use "Right Hand Local Level". What, then, is the purpose of the base_engine parameter? Is it meaningful here to choose something other than TANGO_SUPPORT_ENGINE_TANGO?
(2) target_engine: I choose COORDINATE_FRAME_START_OF_SERVICE as base_frame and DEVICE as target_frame, and OPENGL for base_engine. Then whatever value I choose for target_engine, the result is always the same.
(1) base_engine: If I choose COORDINATE_FRAME_START_OF_SERVICE as base_frame then, as described in the documentation, the coordinate system will use "Right Hand Local Level". What, then, is the purpose of the base_engine parameter? Is it meaningful here to choose something other than TANGO_SUPPORT_ENGINE_TANGO?
This really depends on your use case. It is rare to use a Tango coordinate frame as the base frame, unless you have another transformation that maps start-of-service to your local origin.
Let's say you did a query like this: TangoSupport_getMatrixTransformAtTime(0.0, START_SERVICE, DEVICE, TANGO, TANGO,...); it is equivalent to doing a TangoService_getPoseAtTime query with the start-of-service and device frame pair.
The more common case is that you want to transform something (i.e. a depth point) into your local origin (i.e. the OpenGL origin) for rendering. What you will do is: TangoSupport_getMatrixTransformAtTime(0.0, START_SERVICE, DEPTH, OPENGL, TANGO,...);. The result of this call is opengl_T_depth_camera; you can then multiply this transform with the depth points returned from the depth camera: P_opengl = opengl_T_depth_camera * P_depth_camera. P_opengl is the point you can render directly in OpenGL.
(2) target_engine: I choose COORDINATE_FRAME_START_OF_SERVICE as base_frame and DEVICE as target_frame, and OPENGL for base_engine. Then whatever value I choose for target_engine, the result is always the same.
This should be true for OPENGL and TANGO. There is a happy coincidence that the OpenGL coordinate convention is the same as the device frame's coordinate convention, so if you put TANGO or OPENGL as the target_engine, the result will be the same. But if you put UNITY as the target engine type, the result will be different.
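A rough C sketch of the depth-to-OpenGL flow described above (the enum and field spellings follow my reading of the Tango client/support headers and should be double-checked; error handling omitted):
TangoMatrixTransformData matrix_transform;
TangoSupport_getMatrixTransformAtTime(
    0.0,                                      /* timestamp 0.0 = latest available pose */
    TANGO_COORDINATE_FRAME_START_OF_SERVICE,  /* base_frame */
    TANGO_COORDINATE_FRAME_CAMERA_DEPTH,      /* target_frame */
    TANGO_SUPPORT_ENGINE_OPENGL,              /* base_engine: output in OpenGL convention */
    TANGO_SUPPORT_ENGINE_TANGO,               /* target_engine: depth data is in Tango convention */
    ROTATION_IGNORED,                         /* display_rotation_type */
    &matrix_transform);
/* matrix_transform.matrix now holds opengl_T_depth_camera as a column-major
   4x4 matrix; multiply it with each P_depth_camera to get P_opengl for rendering. */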