New Tango 3D Reconstruction API - google-project-tango

I am using the new 3D reconstruction API (MIRA release). I have a problem when I call the Tango3DR_update function: it returns TANGO_3DR_INVALID when I set the parameters associated with the color camera (const Tango3DR_ImageBuffer *image, const Tango3DR_Pose *image_pose, const Tango3DR_CameraCalibration *calibration). I have checked my parameters and they seem to be correct. When I call this function without the image parameters, it works properly. Is this a known bug?
Thank you in advance for your answers.

TL;DR: The support library ImageBufferManager has a bug with strides. Set color_image.stride = image_buffer->width; when creating your Tango3DR_ImageBuffer.
I think there are two things:
Image Format
First, make sure you use the TANGO_HAL_PIXEL_FORMAT_YCrCb_420_SP format. You can do that by using the ImageBufferManager from the support library.
ImageBufferManager and strides
Second, there is a catch if you use the support library ImageBufferManager. TangoSupport_getLatestImageBuffer seems to fail to initialize the stride of the returned image (I got 0 and some other very large values), which the 3DR library doesn't like. The original TangoImageBuffer from OnColorAvailable has stride = 1280 (= image_width), and forcing that value on the TangoImageBuffer returned from the ImageBufferManager seems to fix the issue. I believe this is a bug in ImageBufferManager.
This means doing
color_image.stride = image_buffer->width;
instead of
color_image.stride = image_buffer->stride;
when creating the Tango3DR_ImageBuffer.
Full code example
I got it working with the following code in my Render method:
TangoImageBuffer* image_buffer;
ret = TangoSupport_getLatestImageBuffer(
    image_buffer_manager_, &image_buffer);
if (ret != TANGO_SUCCESS) {
  LOG(ERROR) << "Error in TangoSupport_getLatestImageBuffer";
}
...
Tango3DR_ImageBuffer color_image;
color_image.width = image_buffer->width;
color_image.height = image_buffer->height;
// VERY important - the support library ImageBufferManager seems to have
// a bug where it will always put the stride of the returned buffer
// at 0, which causes 3DR to fail.
color_image.stride = image_buffer->width;
color_image.timestamp = image_buffer->timestamp;
color_image.format = (Tango3DR_ImageFormatType)image_buffer->format;
color_image.data = image_buffer->data;
ret = Tango3DR_update(
    tango_3dr_context_,
    &cloud,
    &depth_pose_3dr,
    &color_image,
    &color_pose_3dr,
    &tango_3dr_calibration_,
    &updated_indices);
I am using the ImageBufferManager from the support library, so my OnColorAvailable looks like this:
void SynchronizationApplication::OnColorAvailable(
    const TangoImageBuffer* buffer) {
  if (tango_3dr_enabled_ && tango_3dr_use_color_) {
    TangoErrorType ret = TangoSupport_updateImageBuffer(
        image_buffer_manager_, buffer);
    if (ret != TANGO_SUCCESS) {
      LOG(ERROR) << "Error in TangoSupport_updateImageBuffer";
    }
  }
}
And the image_buffer_manager_ is initialized as follows (the pixel format might be important):
TangoSupport_createImageBufferManager(
    TANGO_HAL_PIXEL_FORMAT_YCrCb_420_SP,
    image_width_,
    image_height_,
    &image_buffer_manager_);
I am copying the calibration as follows:
void CopyCalibrationTangoTo3DR(const TangoCameraIntrinsics& tango,
                               Tango3DR_CameraCalibration* out) {
  out->calibration_type =
      (Tango3DR_TangoCalibrationType)tango.calibration_type;
  out->cx = tango.cx;
  out->cy = tango.cy;
  memcpy(out->distortion, tango.distortion, sizeof(double) * 5);
  out->fx = tango.fx;
  out->fy = tango.fy;
  out->height = tango.height;
  out->width = tango.width;
}

Related

Optimizations causing errors

std::vector<VkWriteDescriptorSet> writeDescriptorSets;
for (int index = 0; index < descriptorBindings.size(); index++)
{
    VkWriteDescriptorSet writeDescriptorSet = {};
    writeDescriptorSet.sType = VK_STRUCTURE_TYPE_WRITE_DESCRIPTOR_SET;
    writeDescriptorSet.dstSet = descriptorSet;
    // Binds this descriptor to the binding point matching its index
    writeDescriptorSet.dstBinding = index;
    writeDescriptorSet.descriptorCount = descriptorBindings[index].Count;
    writeDescriptorSet.pNext = nullptr;
    writeDescriptorSet.pTexelBufferView = nullptr;
    if (descriptorBindings[index].Type == DescriptorType::UniformBuffer)
    {
        VkDescriptorBufferInfo uniformBufferDescriptor = {};
        uniformBufferDescriptor.buffer = descriptorBindings[index].UniformBuffer->buffer;
        uniformBufferDescriptor.offset = 0;
        uniformBufferDescriptor.range = descriptorBindings[index].UniformBuffer->size;
        writeDescriptorSet.descriptorType = VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER;
        writeDescriptorSet.pBufferInfo = &uniformBufferDescriptor;
    }
    else if (descriptorBindings[index].Type == DescriptorType::TextureSampler)
    {
        VkDescriptorImageInfo textureDescriptor = {};
        textureDescriptor.imageView = descriptorBindings[index].Texture->imageView->imageView; // The image's view (images are never directly accessed by the shader, but rather through views defining subresources)
        textureDescriptor.sampler = descriptorBindings[index].Texture->sampler;                // The sampler (telling the pipeline how to sample the texture, including repeat, border, etc.)
        textureDescriptor.imageLayout = VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL;              // The current layout of the image (should match the actual use, e.g. shader read)
        //printf("%d\n", textureDescriptor.imageLayout);
        writeDescriptorSet.descriptorType = VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER;
        writeDescriptorSet.pImageInfo = &textureDescriptor;
    }
    writeDescriptorSets.push_back(writeDescriptorSet);
}
vkUpdateDescriptorSets(logicalDevice, writeDescriptorSets.size(), writeDescriptorSets.data(), 0, nullptr);
I am really scratching my head over this. If I enable optimizations in Visual Studio, then the textureDescriptor.imageLayout line, and probably the rest of textureDescriptor, gets optimized out, and that causes errors in Vulkan. If I leave the printf below it uncommented, there is no problem; I suspect the compiler then detects that imageLayout is being used and doesn't get rid of it.
Do I even need optimizations? If so, how can I prevent the compiler from removing that code?
textureDescriptor is not being "optimized out". It's a stack variable whose lifetime ends before you ever give it to Vulkan.
You're going to have to create those objects in a way that lets them outlive the block in which they were created; they need to still be alive at the call to vkUpdateDescriptorSets.
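For example, one way to do that is to keep the VkDescriptorBufferInfo/VkDescriptorImageInfo structs in containers that live until vkUpdateDescriptorSets has been called. This is only a minimal sketch reusing the names from your snippet (descriptorBindings, descriptorSet, logicalDevice), not drop-in code; reserving the vectors up front guarantees they never reallocate, so the pointers stored in the write structs stay valid:
std::vector<VkDescriptorBufferInfo> bufferInfos;
std::vector<VkDescriptorImageInfo>  imageInfos;
bufferInfos.reserve(descriptorBindings.size());  // no reallocation -> stable addresses
imageInfos.reserve(descriptorBindings.size());

std::vector<VkWriteDescriptorSet> writeDescriptorSets;
for (size_t index = 0; index < descriptorBindings.size(); index++)
{
    VkWriteDescriptorSet write = {};
    write.sType = VK_STRUCTURE_TYPE_WRITE_DESCRIPTOR_SET;
    write.dstSet = descriptorSet;
    write.dstBinding = static_cast<uint32_t>(index);
    write.descriptorCount = descriptorBindings[index].Count;

    if (descriptorBindings[index].Type == DescriptorType::UniformBuffer)
    {
        VkDescriptorBufferInfo info = {};
        info.buffer = descriptorBindings[index].UniformBuffer->buffer;
        info.offset = 0;
        info.range  = descriptorBindings[index].UniformBuffer->size;
        bufferInfos.push_back(info);              // lives past the loop
        write.descriptorType = VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER;
        write.pBufferInfo = &bufferInfos.back();
    }
    else if (descriptorBindings[index].Type == DescriptorType::TextureSampler)
    {
        VkDescriptorImageInfo info = {};
        info.imageView   = descriptorBindings[index].Texture->imageView->imageView;
        info.sampler     = descriptorBindings[index].Texture->sampler;
        info.imageLayout = VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL;
        imageInfos.push_back(info);               // lives past the loop
        write.descriptorType = VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER;
        write.pImageInfo = &imageInfos.back();
    }
    writeDescriptorSets.push_back(write);
}
// The info structs are still alive here, so the pointers are valid.
vkUpdateDescriptorSets(logicalDevice,
                       static_cast<uint32_t>(writeDescriptorSets.size()),
                       writeDescriptorSets.data(), 0, nullptr);
Whether you use vectors, a std::deque, or members of a longer-lived object doesn't matter; the point is only that the pointed-to structs must outlive the update call.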

Can't make OpenGL, glew, and SDL2 work together

// SDL 2.0.6, glew 2.1.0
SDL_Init(SDL_INIT_EVENTS | SDL_INIT_VIDEO);
SDL_Window *w = SDL_CreateWindow("Open GL", 0, 0, 1000, 1000, SDL_WINDOW_SHOWN | SDL_WINDOW_OPENGL);
SDL_GLContext ctx = SDL_GL_CreateContext(w);
// values returned by SDL_GL_GetAttribute are commented
SDL_GL_SetAttribute(SDL_GLattr::SDL_GL_CONTEXT_PROFILE_MASK, SDL_GL_CONTEXT_PROFILE_CORE); // doesn't help
SDL_GL_SetAttribute(SDL_GLattr::SDL_GL_CONTEXT_MAJOR_VERSION, 4); // 4
SDL_GL_SetAttribute(SDL_GLattr::SDL_GL_CONTEXT_MINOR_VERSION, 5); // 5, these two accept even 10.10 without error actually, I also tried not calling them and had 2.1 in return
SDL_GL_SetAttribute(SDL_GLattr::SDL_GL_DOUBLEBUFFER, 1); // 1
SDL_GL_SetAttribute(SDL_GLattr::SDL_GL_RED_SIZE, 8); // 8
SDL_GL_SetAttribute(SDL_GLattr::SDL_GL_GREEN_SIZE, 8); // 8
SDL_GL_SetAttribute(SDL_GLattr::SDL_GL_BLUE_SIZE, 8); // 8
SDL_GL_SetAttribute(SDL_GLattr::SDL_GL_DEPTH_SIZE, 24); // 16
SDL_GL_SetAttribute(SDL_GLattr::SDL_GL_STENCIL_SIZE, 8); // 0
//SDL_GLContext ctx = SDL_GL_CreateContext(w); // nothing gets rendered if this is called here instead
//SDL_GL_MakeCurrent(w, ctx); // doesn't help
if (ctx == 0) { // never fails
    cout << "context creation error" << endl;
}
glewExperimental = true;
GLenum e = GLEW_OK;
e = glewInit();
if (e != GLEW_OK) { // never fails
    cout << "glew error" << endl;
}
My stencil buffer wasn't working, so I came to this in my investigation. All the SDL_GL_SetAttribute calls return 0. The same code (excluding glew) was tested on a laptop with Ubuntu and returns 24/8 for depth/stencil. What am I doing wrong?
The documentation of SDL_GL_SetAttribute states
Use this function to set an OpenGL window attribute before window creation.
This function does not have any effect if called after the window has been created.
You cannot change the attributes of an existing context. SDL_GL_SetAttribute will set the attributes that SDL will use the next time a GL context is created.
Now, you actually did try creating the context later:
//SDL_GLContext ctx = SDL_GL_CreateContext(w); // nothing gets rendered if this is called here instead
The most likely explanation is that the
SDL_GL_SetAttribute(SDL_GLattr::SDL_GL_CONTEXT_PROFILE_MASK, SDL_GL_CONTEXT_PROFILE_CORE); // doesn't help
actually does work, and your code is just not compatible with a core profile.
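For reference, the order the documentation describes would look roughly like this (a sketch based on the values from the question, with most error handling omitted). Note that even with the attributes applied, rendering will still fail if the drawing code itself isn't core-profile compatible (for example, relying on fixed-function calls or drawing without a VAO bound):
SDL_Init(SDL_INIT_EVENTS | SDL_INIT_VIDEO);
// Set every attribute *before* creating the window and the context.
SDL_GL_SetAttribute(SDL_GL_CONTEXT_PROFILE_MASK, SDL_GL_CONTEXT_PROFILE_CORE);
SDL_GL_SetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, 4);
SDL_GL_SetAttribute(SDL_GL_CONTEXT_MINOR_VERSION, 5);
SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1);
SDL_GL_SetAttribute(SDL_GL_DEPTH_SIZE, 24);
SDL_GL_SetAttribute(SDL_GL_STENCIL_SIZE, 8);
SDL_Window *w = SDL_CreateWindow("Open GL", 0, 0, 1000, 1000,
                                 SDL_WINDOW_SHOWN | SDL_WINDOW_OPENGL);
SDL_GLContext ctx = SDL_GL_CreateContext(w); // the attributes are honored here
glewExperimental = GL_TRUE;
if (glewInit() != GLEW_OK) {
    cout << "glew error" << endl;
}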

What is the Cocoa method for doing the Carbon FSExchangeObjectsCompat call?

There was this great function in the old MoreFilesX, FSExchangeObjectsCompat, that "exchanges the data between two files". It was typically used as part of a safe-save approach, where a temp file was written out, then FSExchangeObjectsCompat was called to exchange the newly-saved temp file with the old "original" file. It preserved all the metadata, privileges, etc.
I'm seeing a failure with this function on High Sierra, on APFS volumes, which never failed on HFS+ volumes. Not a big surprise -- many of those calls are deprecated.
But what is the Cocoa NSFileManager method of doing the same thing?
You want -[NSFileManager replaceItemAtURL:withItemAtURL:backupItemName:options:resultingItemURL:error:].
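A minimal sketch of the call (the URL names here, originalURL for the existing document and tempURL for the freshly written temporary file, are placeholders for illustration):
NSError *error = nil;
NSURL *resultingURL = nil;
BOOL ok = [[NSFileManager defaultManager]
           replaceItemAtURL:originalURL   // the file being safe-saved
              withItemAtURL:tempURL       // the newly written temp file
             backupItemName:nil           // or e.g. @"MyDoc.bak" to keep the old data around
                    options:0             // default; NSFileManagerItemReplacementUsingNewMetadataOnly is also available
           resultingItemURL:&resultingURL
                      error:&error];
if (!ok) {
    NSLog(@"replaceItemAtURL failed: %@", error);
}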
You can do something similar using lower-level functions. Here's code I wrote to be used with a pre-10.12 SDK. You can make it somewhat simpler if you compile against the 10.12 SDK or later, and even simpler if you have a deployment target that is 10.12 or later.
#include <dispatch/dispatch.h>
#include <dlfcn.h>
#include <unistd.h>
#include <cstdio>
#include <cstring>
#include <string>
#include <vector>

#ifndef RENAME_SWAP
#define RENAME_SWAP 0x00000002
#endif

// Signature of renamex_np, which is looked up dynamically below.
typedef int (*renameFuncType)( const char*, const char*, unsigned int );

/*!
    @function   ExchangeFiles
    @abstract   Given full paths to two files on the same volume,
                swap their contents.
    @discussion This is often part of a safe-save strategy.
    @param      inOldFile   Full path to a file.
    @param      inNewFile   Full path to a file.
    @result     0 if all went well, -1 otherwise.
*/
int ExchangeFiles( const char* inOldFile, const char* inNewFile )
{
    int result = -1;
    static dispatch_once_t sOnce = 0;
    static renameFuncType sRenameFunc = NULL;

    // Try to get a function pointer to renamex_np, which is available in OS 10.12 and later.
    dispatch_once( &sOnce,
    ^{
        sRenameFunc = (renameFuncType) dlsym( RTLD_DEFAULT, "renamex_np" );
    });

    // renamex_np is only available on OS 10.12 and later, and does not work on HFS+ volumes
    // but does work on APFS volumes. Being the latest and greatest, we try it first.
    if (sRenameFunc != NULL)
    {
        result = (*sRenameFunc)( inOldFile, inNewFile, RENAME_SWAP );
    }
    if (result != 0)
    {
        // exchangedata is an older function that works on HFS+ but not APFS.
        result = exchangedata( inOldFile, inNewFile, 0 );
    }
    if (result != 0)
    {
        // Neither function worked, we must go old school.
        std::string nameTemplate( inOldFile );
        nameTemplate += "-swapXXXX";
        // Make a mutable copy of the template
        std::vector<char> workPath( nameTemplate.size() + 1 );
        memcpy( &workPath[0], nameTemplate.c_str(), nameTemplate.size() + 1 );
        mktemp( &workPath[0] );
        std::string tempPath( &workPath[0] );
        // Make the old file have a temporary name
        result = rename( inOldFile, tempPath.c_str() );
        // Put the new file data under the old name.
        if (result == 0)
        {
            result = rename( inNewFile, inOldFile );
        }
        // Put the old data under the new name.
        if (result == 0)
        {
            result = rename( tempPath.c_str(), inNewFile );
        }
    }
    return result;
}

DX11: Indexed drawing doesn't produce any visual output

For our student project I've been tinkering with an OBJ loader in order to import models into our application.
It loads without issues, and drawing kind of works without an index buffer (the model is obviously not represented correctly, since I'm not using one).
However, drawing with DeviceContext->DrawIndexed shows nothing on screen.
Without indexed drawing
With indexed drawing
Buffer creation method:
void ObjectLoader::CreateBuffers()
{
    //Index buffer
    D3D11_BUFFER_DESC iBufferDesc;
    memset(&iBufferDesc, 0, sizeof(iBufferDesc));
    iBufferDesc.BindFlags = D3D11_BIND_INDEX_BUFFER;
    iBufferDesc.Usage = D3D11_USAGE_DEFAULT;
    iBufferDesc.ByteWidth = sizeof(DWORD);

    D3D11_SUBRESOURCE_DATA indexData;
    indexData.pSysMem = &ind;

    pDevice->CreateBuffer(&iBufferDesc, &indexData, &pIndexBuffer);

    //Vertex buffer
    D3D11_BUFFER_DESC bufferDesc;
    memset(&bufferDesc, 0, sizeof(bufferDesc));
    bufferDesc.BindFlags = D3D11_BIND_VERTEX_BUFFER;
    bufferDesc.Usage = D3D11_USAGE_DEFAULT;
    bufferDesc.ByteWidth = sizeof(TriangleVertex) * this->NumberOfVerts();

    D3D11_SUBRESOURCE_DATA data;
    data.pSysMem = tva;

    pDevice->CreateBuffer(&bufferDesc, &data, &pVertexBuffer);
}
Draw method:
void ObjectLoader::Draw()
{
    if (pDevice == nullptr)
        return;

    UINT32 vertexSize = sizeof(float) * 5;
    UINT32 offset = 0;

    pDeviceContext->IASetVertexBuffers(0, 1, &pVertexBuffer, &vertexSize, &offset);
    pDeviceContext->IASetIndexBuffer(this->pIndexBuffer, DXGI_FORMAT_R32_UINT, 0);
    pDeviceContext->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);

    pDeviceContext->DrawIndexed(vIndex.size(), 0, 0);
    //pDeviceContext->Draw(this->NumberOfVerts(), 0);
}
What the hell am I missing? I've looked at several books on indexed drawing and it seems pretty straightforward. At first I thought the winding order was reversed, but I checked that by simply reversing the index array; same result.
If you need more code, let me know, but I feel this should suffice.
Thanks in advance!
Edit: OT: I never figured out how to get my code properly formatted here, so I apologize for that; feel free to share how that's done.

How to get current display mode (resolution, refresh rate) of a monitor/output in DXGI?

I am creating a multi-monitor full screen DXGI/D3D application. I am enumerating through the available outputs and adapters in preparation of creating their swap chains.
When creating my swap chain using DXGI's IDXGIFactory::CreateSwapChain method, I need to provide a swap chain description which includes a buffer description of type DXGI_MODE_DESC that details the width, height, refresh rate, etc. How can I find out what the output is currently set to (or how can I find out what the display mode of the output currently is)? I don't want to change the user's resolution or refresh rate when I go to full screen with this swap chain.
After looking around some more I stumbled upon the EnumDisplaySettings legacy GDI function, which allows me to access the current resolution and refresh rate. Combining this with the IDXGIOutput::FindClosestMatchingMode function I can get pretty close to the current display mode:
void getClosestDisplayModeToCurrent(IDXGIOutput* output, DXGI_MODE_DESC* outCurrentDisplayMode)
{
    DXGI_OUTPUT_DESC outputDesc;
    output->GetDesc(&outputDesc);
    HMONITOR hMonitor = outputDesc.Monitor;

    MONITORINFOEX monitorInfo;
    monitorInfo.cbSize = sizeof(MONITORINFOEX);
    GetMonitorInfo(hMonitor, &monitorInfo);

    DEVMODE devMode;
    devMode.dmSize = sizeof(DEVMODE);
    devMode.dmDriverExtra = 0;
    EnumDisplaySettings(monitorInfo.szDevice, ENUM_CURRENT_SETTINGS, &devMode);

    DXGI_MODE_DESC current;
    current.Width = devMode.dmPelsWidth;
    current.Height = devMode.dmPelsHeight;
    bool useDefaultRefreshRate = 1 == devMode.dmDisplayFrequency || 0 == devMode.dmDisplayFrequency;
    current.RefreshRate.Numerator = useDefaultRefreshRate ? 0 : devMode.dmDisplayFrequency;
    current.RefreshRate.Denominator = useDefaultRefreshRate ? 0 : 1;
    current.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
    current.ScanlineOrdering = DXGI_MODE_SCANLINE_ORDER_UNSPECIFIED;
    current.Scaling = DXGI_MODE_SCALING_UNSPECIFIED;

    output->FindClosestMatchingMode(&current, outCurrentDisplayMode, NULL);
}
...But I don't think this is really the correct answer, because I need to rely on legacy functions. Is there any way to get the exact current display mode with DXGI alone, rather than using this method?
I saw a solution here:
http://www.rastertek.com/dx11tut03.html
in the following part:
// Now go through all the display modes and find the one that matches the screen width and height.
// When a match is found store the numerator and denominator of the refresh rate for that monitor.
for(i=0; i<numModes; i++)
{
    if(displayModeList[i].Width == (unsigned int)screenWidth)
    {
        if(displayModeList[i].Height == (unsigned int)screenHeight)
        {
            numerator = displayModeList[i].RefreshRate.Numerator;
            denominator = displayModeList[i].RefreshRate.Denominator;
        }
    }
}
Is my understanding correct that the available resolutions are in displayModeList?
This might be what you are looking for:
// Get display mode list
std::vector<DXGI_MODE_DESC*> modeList = GetDisplayModeList(*outputItor);
for(std::vector<DXGI_MODE_DESC*>::iterator modeItor = modeList.begin(); modeItor != modeList.end(); ++modeItor)
{
    // PrintDisplayModeInfo(*modeItor);
}

std::vector<DXGI_MODE_DESC*> GetDisplayModeList(IDXGIOutput* output)
{
    UINT num = 0;
    DXGI_FORMAT format = DXGI_FORMAT_R32G32B32A32_TYPELESS;
    UINT flags = DXGI_ENUM_MODES_INTERLACED | DXGI_ENUM_MODES_SCALING;

    // Get number of display modes
    output->GetDisplayModeList(format, flags, &num, 0);

    // Get display mode list
    DXGI_MODE_DESC* pDescs = new DXGI_MODE_DESC[num];
    output->GetDisplayModeList(format, flags, &num, pDescs);

    std::vector<DXGI_MODE_DESC*> displayList;
    for (UINT i = 0; i < num; ++i)
    {
        displayList.push_back(&pDescs[i]);
    }
    return displayList;
}
