Need help understanding Tango's functions related to coordinate systems - google-project-tango

I am confused by the parameters of the functions related to coordinate systems, for example:
TangoSupport_getMatrixTransformAtTime(double timestamp,
TangoCoordinateFrameType base_frame,
TangoCoordinateFrameType target_frame,
TangoSupportEngineType base_engine,
TangoSupportEngineType target_engine,
TangoSupportDisplayRotation display_rotation_type,
TangoMatrixTransformData *matrix_transform)
(1) base_engine: If I choose COORDINATE_FRAME_START_OF_SERVICE as base_frame then, as described in the documentation, the coordinate system uses the "Right Hand Local Level" convention. What is the purpose of the base_engine parameter, then? Is it meaningful here to choose anything other than TANGO_SUPPORT_ENGINE_TANGO?
(2) target_engine: If I choose COORDINATE_FRAME_START_OF_SERVICE as base_frame, DEVICE as target_frame, and OPENGL as base_engine, then whatever value I choose for target_engine, the result is always the same.

(1) base_engine: If I choose COORDINATE_FRAME_START_OF_SERVICE as base_frame then, as described in the documentation, the coordinate system uses the "Right Hand Local Level" convention. What is the purpose of the base_engine parameter, then? Is it meaningful here to choose anything other than TANGO_SUPPORT_ENGINE_TANGO?
This really depends on your use case. It is rare to use the Tango convention for the base frame unless you have another transformation that maps the start-of-service frame into your own local origin.
Let's say you do a query like this: TangoSupport_getMatrixTransformAtTime(0.0, START_SERVICE, DEVICE, TANGO, TANGO, ...);. It is equivalent to doing a TangoService_getPoseAtTime query with the start-of-service and device frame pair.
A more common case is that you want to transform something (e.g. a depth point) into your local origin (e.g. the OpenGL origin) for rendering. What you would do is: TangoSupport_getMatrixTransformAtTime(0.0, START_SERVICE, DEPTH, OPENGL, TANGO, ...);. The result of this call is opengl_T_depth_camera, and you can then multiply this transform with the points returned from the depth camera: P_opengl = opengl_T_depth_camera * P_depth_camera;. P_opengl is a point you can render directly in OpenGL, as in the sketch below.
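Here is a minimal C sketch of that pattern. The frame enums (TANGO_COORDINATE_FRAME_*), TANGO_SUCCESS, and the fields of TangoMatrixTransformData are written from memory and may differ between SDK versions, so treat this as illustrative rather than a definitive implementation.

/* Illustrative sketch only; enum spellings and struct fields may vary by SDK version. */
TangoSupportDisplayRotation display_rotation;   /* set from the current Android display rotation */
TangoMatrixTransformData matrix_transform;

TangoErrorType ret = TangoSupport_getMatrixTransformAtTime(
    0.0,                                      /* 0.0 = most recent available pose */
    TANGO_COORDINATE_FRAME_START_OF_SERVICE,  /* base_frame                       */
    TANGO_COORDINATE_FRAME_CAMERA_DEPTH,      /* target_frame                     */
    TANGO_SUPPORT_ENGINE_OPENGL,              /* base_engine: OpenGL convention   */
    TANGO_SUPPORT_ENGINE_TANGO,               /* target_engine: Tango convention  */
    display_rotation,
    &matrix_transform);

if (ret == TANGO_SUCCESS) {
  /* matrix_transform.matrix holds opengl_T_depth_camera as a 4x4 matrix.
   * Multiply it with each depth point (x, y, z, 1):
   *   P_opengl = opengl_T_depth_camera * P_depth_camera
   * and P_opengl can be rendered directly in OpenGL. */
}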
(2) target_engine: If I choose COORDINATE_FRAME_START_OF_SERVICE as base_frame, DEVICE as target_frame, and OPENGL as base_engine, then whatever value I choose for target_engine, the result is always the same.
This should be true for OPENGL and TANGO. There is a happy coincidence that the OpenGL convention is the same as the device frame's convention, so whether you put TANGO or OPENGL as the target_engine, the result will be the same. But if you put UNITY as the target engine type, the result will be different.

Related

OpenGL ES / Vulkan: Per fragment stencil write/test (on Qualcomm Snapdragon XR2)

I would like to render two meshes, the first one writing into the stencil buffer and the second one testing against it.
I want to do that on a per fragment level though (the fragment shader of the first object should define which value to write into the stencil buffer and the fragment shader of the second object should define whether and against which stencil value the fragments of the second object should be clipped).
My target platform is the Oculus Quest 2, which has a Qualcomm Snapdragon XR2.
If the platform supported GL_ARM_shader_framebuffer_fetch_depth_stencil, I could use that, but it's only supported on some Mali GPUs.
The reason I want to use stencils is that I want to render everything in a single forward rendering pass for performance reasons, and since I'm already forced to use fragment discard in my shaders, early-z rejection is off the table anyway, so that's not a concern.
How can I achieve per fragment stencil writing/testing on Qualcomm Snapdragon XR2 in either OpenGL ES 3.0 or Vulkan?
Any pointers are appreciated.
I had to print out all available extensions on the Quest 2 recently for a project and can confirm that GL_ARM_shader_framebuffer_fetch_depth_stencil is supported.
To be clear though, this extension only enables reading the stencil value, not writing to it.
If it helps, these are the supported extensions:
GL_OES_EGL_image_external
GL_OES_EGL_sync
GL_OES_vertex_half_float
GL_OES_framebuffer_object
GL_OES_rgb8_rgba8
GL_OES_compressed_ETC1_RGB8_texture
GL_AMD_compressed_ATC_texture
GL_KHR_texture_compression_astc_ldr
GL_KHR_texture_compression_astc_hdr
GL_OES_texture_compression_astc
GL_OES_texture_npot
GL_EXT_texture_filter_anisotropic
GL_EXT_texture_format_BGRA8888
GL_EXT_read_format_bgra
GL_OES_texture_3D
GL_EXT_color_buffer_float
GL_EXT_color_buffer_half_float
GL_QCOM_alpha_test
GL_OES_depth24
GL_OES_packed_depth_stencil
GL_OES_depth_texture
GL_OES_depth_texture_cube_map
GL_EXT_sRGB
GL_OES_texture_float
GL_OES_texture_float_linear
GL_OES_texture_half_float
GL_OES_texture_half_float_linear
GL_EXT_texture_type_2_10_10_10_REV
GL_EXT_texture_sRGB_decode
GL_EXT_texture_format_sRGB_override
GL_OES_element_index_uint
GL_EXT_copy_image
GL_EXT_geometry_shader
GL_EXT_tessellation_shader
GL_OES_texture_stencil8
GL_EXT_shader_io_blocks
GL_OES_shader_image_atomic
GL_OES_sample_variables
GL_EXT_texture_border_clamp
GL_EXT_EGL_image_external_wrap_modes
GL_EXT_multisampled_render_to_texture
GL_EXT_multisampled_render_to_texture2
GL_OES_shader_multisample_interpolation
GL_EXT_texture_cube_map_array
GL_EXT_draw_buffers_indexed
GL_EXT_gpu_shader5
GL_EXT_robustness
GL_EXT_texture_buffer
GL_EXT_shader_framebuffer_fetch
GL_ARM_shader_framebuffer_fetch_depth_stencil
GL_OES_texture_storage_multisample_2d_array
GL_OES_sample_shading
GL_OES_get_program_binary
GL_EXT_debug_label
GL_KHR_blend_equation_advanced
GL_KHR_blend_equation_advanced_coherent
GL_QCOM_tiled_rendering
GL_ANDROID_extension_pack_es31a
GL_EXT_primitive_bounding_box
GL_OES_standard_derivatives
GL_OES_vertex_array_object
GL_EXT_disjoint_timer_query
GL_KHR_debug
GL_EXT_YUV_target
GL_EXT_sRGB_write_control
GL_EXT_texture_norm16
GL_EXT_discard_framebuffer
GL_OES_surfaceless_context
GL_OVR_multiview
GL_OVR_multiview2
GL_EXT_texture_sRGB_R8
GL_KHR_no_error
GL_EXT_debug_marker
GL_OES_EGL_image_external_essl3
GL_OVR_multiview_multisampled_render_to_texture
GL_EXT_buffer_storage
GL_EXT_external_buffer
GL_EXT_blit_framebuffer_params
GL_EXT_clip_cull_distance
GL_EXT_protected_textures
GL_EXT_shader_non_constant_global_initializers
GL_QCOM_texture_foveated
GL_QCOM_texture_foveated2
GL_QCOM_texture_foveated_subsampled_layout
GL_QCOM_shader_framebuffer_fetch_noncoherent
GL_QCOM_shader_framebuffer_fetch_rate
GL_EXT_memory_object
GL_EXT_memory_object_fd
GL_EXT_EGL_image_array
GL_NV_shader_noperspective_interpolation
GL_KHR_robust_buffer_access_behavior
GL_EXT_EGL_image_storage
GL_EXT_blend_func_extended
GL_EXT_clip_control
GL_OES_texture_view
GL_EXT_fragment_invocation_density
GL_QCOM_motion_estimation
GL_QCOM_validate_shader_binary
GL_QCOM_YUV_texture_gather
GL_IMG_texture_filter_cubic
You can have per-invocation stencil reference values with VK_EXT_shader_stencil_export. Nevertheless, that extension is not widely supported.
I am not sure what you are trying to do, but it seems you will need to find another way.
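If VK_EXT_shader_stencil_export does turn up on your device (check vkEnumerateDeviceExtensionProperties), the write side looks roughly like the Vulkan GLSL sketch below, assuming your shader compiler (e.g. glslang) maps GL_ARB_shader_stencil_export to the SPIR-V stencil-export capability. Note this only covers exporting a per-fragment stencil value; testing against an arbitrary per-fragment reference in the second pass is still not covered.

#version 450
#extension GL_ARB_shader_stencil_export : require

layout(location = 0) out vec4 outColor;

void main() {
    // Placeholder per-fragment logic deciding which stencil value to write.
    int stencilValue = (gl_FragCoord.x > 512.0) ? 2 : 1;

    gl_FragStencilRefARB = stencilValue;  // replaces the fixed-function stencil reference
    outColor = vec4(1.0);
}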

Figuring out coordinate format contained in SDO_ORDINATE_ARRAY attribute

I am working with some data which specifies an installation path, in another data source I have the location of events based on their lat/long location.
The installation location contained in the oracle attribute SDO_ORDINATE_ARRAY does not match any X/Y geographic coordinate system I am familiar with (Lat/Long or UTM). Is there a way to figure out what the data type is that is stored in the SDO_ORDINATE_ARRAY?
Here is an example of the data for a path with 3 (x,y) points:
MDSYS.SDO_GEOMETRY(2002,1026911,NULL,
MDSYS.SDO_ELEM_INFO_ARRAY(1,2,1),
MDSYS.SDO_ORDINATE_ARRAY(
1352633.64991299994289875030517578125,
12347411.6615570001304149627685546875,
1352638.02988700009882450103759765625,
12347479.02890899963676929473876953125,
1352904.06293900008313357830047607421875,
12347470.76137300021946430206298828125,
))
The above should be roughly in the vicinity of 33.9845° N, 117.5159° W. I went through various conversions but could not find anything that came anywhere close to that location.
I read through the documentation on SDO_GEOMETRY on the Oracle page below and did not find any help in figuring out what the coordinate system is.
https://docs.oracle.com/database/121/SPATL/sdo_geometry-object-type.htm#SPATL494
Alternatively, if there is a way I can type in the lat/long somewhere to see all of the different coordinate types which are equivalent, I might also be able to figure out which format this is.
It looks like there is a typo in the second argument of MDSYS.SDO_GEOMETRY(2002, 1026911, NULL, ...).
1026911 is supposed to be an SRS (Spatial Reference System) code.
If we drop the extra trailing 1 we get 102691, which is a very well-known SRS code:
ESRI:102691, i.e. NAD 1983 StatePlane Minnesota North FIPS 2201 (US Feet).
The corresponding WKT gives you all the necessary information to perform any coordinate conversion:
PROJCS["NAD_1983_StatePlane_Minnesota_North_FIPS_2201_Feet",
GEOGCS["GCS_North_American_1983",
DATUM["North_American_Datum_1983",
SPHEROID["GRS_1980",6378137,298.257222101]],
PRIMEM["Greenwich",0],
UNIT["Degree",0.017453292519943295]],
PROJECTION["Lambert_Conformal_Conic_2SP"],
PARAMETER["False_Easting",2624666.666666666],
PARAMETER["False_Northing",328083.3333333333],
PARAMETER["Central_Meridian",-93.09999999999999],
PARAMETER["Standard_Parallel_1",47.03333333333333],
PARAMETER["Standard_Parallel_2",48.63333333333333],
PARAMETER["Latitude_Of_Origin",46.5],
UNIT["Foot_US",0.30480060960121924],
AUTHORITY["EPSG","102691"]]
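As a hedged sketch of that conversion (the table and column names my_table/geom are placeholders, and this assumes SRID 102691 is actually registered in your instance's MDSYS.CS_SRS), you could repair the SRID and let Oracle reproject to WGS84 lat/long:

-- Rebuild the geometry with the corrected SRID, then transform to WGS84 (4326).
SELECT SDO_CS.TRANSFORM(
         MDSYS.SDO_GEOMETRY(t.geom.SDO_GTYPE,
                            102691,              -- corrected SRID
                            t.geom.SDO_POINT,
                            t.geom.SDO_ELEM_INFO,
                            t.geom.SDO_ORDINATES),
         4326) AS geom_wgs84
FROM   my_table t;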

Save a figure to file with specific resolution

In an old version of my code, I used to do a hardcopy() with a given resolution, i.e.:
frame = hardcopy(figHandle, ['-d' renderer], ['-r' num2str(round(pixelsperinch))]);
For reference, hardcopy saves a figure window to file.
Then I would typically perform:
ZZ = rgb2gray(frame) < 255/2;
se = strel('disk',diskSize);
ZZ2 = imdilate(ZZ,se); %perform dilation.
Surface = bwarea(ZZ2); %get estimated surface (in pixels)
This worked until I switched to MATLAB 2017, where the hardcopy() function is deprecated and we are left with the print() function instead.
I am unable to extract the data from the figure handle at a specific resolution using print. I've tried many things, including:
frame = print(figHandle, '-opengl', strcat('-r',num2str(round(pixelsperinch))));
But it doesn't work. How can I overcome this?
EDIT
I don't want to 'save' or create a figure file; my aim is to extract the data from the figure in order to measure a surface after a dilation process. I just want to keep this information, and since I'm processing a LOT of different trajectories (approx. 1e7 in total), I don't want to save each one to disk (this is costly in execution time). I'm running this code on a remote server (without a graphics card).
The issue I'm struggling with is: "One or more output arguments not assigned during call to "varargout"."
getframe() does not allow setting a specific resolution (it uses the current screen resolution instead, as far as I know).
EDIT2
OK, I figured out how to do it: you need to pass the '-RGBImage' argument like this:
frame = print(figHandle, ['-' renderer], ['-r' num2str(round(pixelsperinch))], '-RGBImage');
It also accepts a custom resolution and renderer, as specified in the documentation; a consolidated sketch of the full pipeline follows.
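A minimal MATLAB sketch putting the pieces from this question together, reusing the same variable names (figHandle, pixelsperinch, diskSize) and the '-opengl' renderer from above:

% Grab the figure as an in-memory RGB image at the requested resolution
% (no file is written to disk), then run the original measurement steps.
frame = print(figHandle, '-opengl', ['-r' num2str(round(pixelsperinch))], '-RGBImage');

ZZ  = rgb2gray(frame) < 255/2;   % binarize
se  = strel('disk', diskSize);
ZZ2 = imdilate(ZZ, se);          % perform dilation
Surface = bwarea(ZZ2);           % estimated surface (in pixels)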
I think you must specify formattype too (-dtiff in my case). I've tried this in MATLAB 2016b with no problem:
print(figHandle,'-dtiff', '-opengl', '-r600', 'nameofmyfig');
EDIT:
If you need the CData, just find the handle of the corresponding image object and get its CData:
f = findobj('Tag','mytag')
Then, depending on your MATLAB version, use:
mycdata = get(f,'CData');
or directly
mycdata = f.CData;
EDIT 2:
You can set the tag of your image programmatically and then do what I said previously:
a = imshow('peppers.png');
set(a,'Tag','mytag');

Ruby SketchUp ... add a measurement

Still learning about Ruby + SketchUp!
Today, I would like to add a measurement (is that the right English word?) the way I can do manually with the 'cotation' tool (the Dimension tool in the French version), where I click two points and then drag the measurement text.
I can't find how to do that with the Ruby API in the docs...
Thanks for your help.
You are probably looking for the Sketchup::Entities::add_dimension_linear method.
http://ruby.sketchup.com/Sketchup/Entities.html#add_dimension_linear-instance_method
Assuming a and b below are edges:
voffset = [-20, 0, 0]
Sketchup.active_model.entities.add_dimension_linear(a.start, b.start, voffset)
The value of voffset controls not just how far the dimension is offset, but also the axis along which the measurement is made. You may need to experiment with different values to get a feel for how that determination is done. As with many things in SketchUp, it often guesses at (or 'infers') what you want. A slightly fuller sketch follows.
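For example, a hedged sketch that dimensions between the start vertices of the first two edges in the current selection (the variable names here are just for illustration):

# Add a linear dimension between the start vertices of the first two
# selected edges. voffset controls both the offset distance and the
# measurement axis, as noted above.
model = Sketchup.active_model
edges = model.selection.grep(Sketchup::Edge)
if edges.length >= 2
  a, b = edges[0], edges[1]
  voffset = Geom::Vector3d.new(-20, 0, 0)
  model.active_entities.add_dimension_linear(a.start, b.start, voffset)
end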

Get the coordinates of an object (component) in SketchUp (in the Ruby console)

I have a question: how can I get the coordinates (x, y, z) of an object (component) in the Ruby console? I need these coordinates so I can send them to another object. Thanks.
"A coordinate" is a little bit ambiguous, depending if you want a point from the bounding box, the insertion point or a vertex inside the component.
But a simple generic example would be:
# Assuming user has selected a ComponentInstance:
instance = Sketchup.active_model.selection[0]
puts instance.transformation.origin
ComponentInstance.transformation (SketchUp 6.0+)
The transformation method is used to retrieve the transformation of this instance.
http://www.sketchup.com/intl/en/developer/docs/ourdoc/componentinstance.php#transformation
Transformation.origin (SketchUp 6.0+)
The origin method retrieves the origin of a rigid transformation.
http://www.sketchup.com/intl/en/developer/docs/ourdoc/transformation.php#origin
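Since "a coordinate" can mean different things (see above), here is a hedged sketch printing two of the usual candidates for the selected instance:

# Two common "coordinates" of a selected component instance:
# its insertion point (the origin of its transformation) and the
# center of its bounding box.
instance = Sketchup.active_model.selection.grep(Sketchup::ComponentInstance).first
if instance
  puts "Insertion point (axes origin): #{instance.transformation.origin}"
  puts "Bounding box center:           #{instance.bounds.center}"
end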
