Does setting exposure/ISO in Google Tango config work? - google-project-tango

I am trying to use the Tango device to capture HDR images, but no matter how I set the Tango config ISO and exposure settings, there is no apparent change in the image.
I am disabling auto-exposure and auto-white-balance and setting manual values for the ISO and exposure time. Regardless of my settings, the colour camera images returned from onFrameAvailable always seem to be in auto mode: the measured average RGB of a given scene is the same whether I set the ISO to 100, 200, 400, or 800, and whether I set the exposure to 11.1 ms or to 2, 8, or 1/2 times that amount. It also still behaves like auto-exposure: when I point the device towards a bright window, the window appears pure white for about a second, then the brightness drops and I can see what is outside.
My Yellowstone tablet is up to date (KOT49H.150731) and I have the Turing release of the client API. I am using the C API in an app that is basically a combination of the example programs for motion tracking, depth, and augmented reality. Is the following code supposed to work?
const bool autoExposure = false;
const int32_t iso = 800;
const double exposure = 11.1*2.0; // milliseconds
if (TangoConfig_setBool(config_, "config_color_mode_auto", autoExposure) != TANGO_SUCCESS) {
    LOGE("config_color_mode_auto Failed");
    return false;
}
if (TangoConfig_setInt32(config_, "config_color_iso", iso) != TANGO_SUCCESS) {
    LOGE("config_color_iso Failed");
    return false;
}
if (TangoConfig_setInt32(config_, "config_color_exp", (int32_t)::floor(exposure * 1e6)) != TANGO_SUCCESS) {
    LOGE("config_color_exp Failed");
    return false;
}

bool verifyAutoExposureState;
int32_t verifyIso, verifyExp;
TangoConfig_getBool(config_, "config_color_mode_auto", &verifyAutoExposureState);
TangoConfig_getInt32(config_, "config_color_iso", &verifyIso);
TangoConfig_getInt32(config_, "config_color_exp", &verifyExp);
LOGE("config_colour autoExposure=%s %d %d", verifyAutoExposureState ? "On" : "Off", verifyIso, verifyExp);
The reason to use the Tango API for capturing HDR on Android instead of going through the Android API is to get pose estimates along with the images.
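For reference, pairing each colour frame with a pose is typically done by querying the pose at the image's timestamp from inside the frame callback. A minimal sketch, assuming the standard C client API (TangoService_getPoseAtTime and the TangoImageBuffer timestamp field), with error handling trimmed:
#include <tango_client_api.h>

// Sketch only: pair each colour frame with the device pose at the frame's timestamp.
static void onFrameAvailable(void* context, TangoCameraId id, const TangoImageBuffer* buffer)
{
    TangoCoordinateFramePair pair;
    pair.base = TANGO_COORDINATE_FRAME_START_OF_SERVICE;
    pair.target = TANGO_COORDINATE_FRAME_DEVICE;

    TangoPoseData pose;
    if (TangoService_getPoseAtTime(buffer->timestamp, pair, &pose) != TANGO_SUCCESS ||
        pose.status_code != TANGO_POSE_VALID)
    {
        LOGE("No valid pose for colour frame at t=%f", buffer->timestamp);
        return;
    }
    // ... store the image buffer together with pose.translation / pose.orientation
    //     as one bracketed exposure of the HDR stack ...
}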

Related

Using DEFAULT_GUI_FONT in high DPI Windows application

I have a Windows application which I want to look good at high DPI monitors. The application is using DEFAULT_GUI_FONT in lots of places, and the font created this way doesn't scale correctly.
Is there any simple way to fix this problem with not too much pain?
You need to get the NONCLIENTMETRICS structure via SystemParametersInfo(SPI_GETNONCLIENTMETRICS, ...) and then use its LOGFONT data to create your own font. Alternatively, you can query SystemParametersInfo(SPI_GETICONTITLELOGFONT) and use that.
The recommended fonts for different purposes can be obtained from the NONCLIENTMETRICS structure.
For automatically DPI-scaled fonts (Windows 10 1607+, must be per-monitor DPI-aware):
// Your window's handle
HWND window;
// Get the DPI for which your window should scale to
UINT dpi = GetDpiForWindow(window);
// Obtain the recommended fonts, which are already correctly scaled for the current DPI
NONCLIENTMETRICSW non_client_metrics;
non_client_metrics.cbSize = sizeof(non_client_metrics); // cbSize must be set before the call
if (!SystemParametersInfoForDpi(SPI_GETNONCLIENTMETRICS, sizeof(non_client_metrics), &non_client_metrics, 0, dpi))
{
    // Error handling
}
// Create the appropriate font(s)
HFONT message_font = CreateFontIndirectW(&non_client_metrics.lfMessageFont);
if (!message_font)
{
    // Error handling
}
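Once created, the font is applied to a control in the usual way (hwnd_control here is just a placeholder for one of your child windows); remember to DeleteObject the font when you replace it or no longer need it:
// Tell the control to use the new font and repaint itself
SendMessageW(hwnd_control, WM_SETFONT, reinterpret_cast<WPARAM>(message_font), TRUE);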
For older Windows versions you can use the system-wide DPI and scale the font manually (Windows 7+, must be system DPI-aware):
// Your window's handle
HWND window;
// Obtain the recommended fonts, which are already correctly scaled for the current DPI
NONCLIENTMETRICSW non_client_metrics;
non_client_metrics.cbSize = sizeof(non_client_metrics); // cbSize must be set before the call
if (!SystemParametersInfoW(SPI_GETNONCLIENTMETRICS, sizeof(non_client_metrics), &non_client_metrics, 0))
{
    // Error handling
}
// Get the system-wide DPI
HDC hdc = GetDC(nullptr);
if (!hdc)
{
    // Error handling
}
UINT dpi = GetDeviceCaps(hdc, LOGPIXELSY);
ReleaseDC(nullptr, hdc);
// Scale the font(s)
constexpr int font_size = 12;
non_client_metrics.lfMessageFont.lfHeight = -((font_size * (int)dpi) / 72);
// Create the appropriate font(s)
HFONT message_font = CreateFontIndirectW(&non_client_metrics.lfMessageFont);
if (!message_font)
{
    // Error handling
}
NONCLIENTMETRICS also contains several other fonts. Make sure to choose the right one for your purpose.
You should set the DPI-awareness level in your application manifest as described here for best compatibility.
WinForms in the .NET framework internally converts the DEFAULT_GUI_FONT (which is in fact used to get the default font for WinForms Forms and Controls in most situations) by scaling its height from pixels (which is the unit GDI fonts use natively) to Points (which is preferred by GDI+). Drawing text using points implies that the physical size of the rendered text depends on the monitor DPI setting.
System.Drawing.Font.SizeInPoints:
float emHeightInPoints;

IntPtr screenDC = UnsafeNativeMethods.GetDC(NativeMethods.NullHandleRef);
try {
    using (Graphics graphics = Graphics.FromHdcInternal(screenDC)) {
        float pixelsPerPoint = (float)(graphics.DpiY / 72.0);
        float lineSpacingInPixels = this.GetHeight(graphics);
        float emHeightInPixels = lineSpacingInPixels * FontFamily.GetEmHeight(Style) / FontFamily.GetLineSpacing(Style);

        emHeightInPoints = emHeightInPixels / pixelsPerPoint;
    }
}
finally {
    UnsafeNativeMethods.ReleaseDC(NativeMethods.NullHandleRef, new HandleRef(null, screenDC));
}

return emHeightInPoints;
Obviously you cannot use this directly as it's C#. But besides that, this article suggests that you should scale pixel dimensions assuming a 96 dpi design, and use GetDpiForWindow to determine the actual DPI. Note that the "72" in the formula above has nothing to do with the monitor DPI setting; it comes from the fact that .NET likes to use fonts specified in points rather than pixels (otherwise just scale the LOGFONT's height by DPIy/96).
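As a concrete sketch of that last suggestion: given a LOGFONT whose height was chosen for the 96-DPI baseline, scaling it to the window's actual DPI looks like this (design_font is a placeholder; dpi is the value obtained from GetDpiForWindow above):
// design_font: any LOGFONTW whose lfHeight was chosen for the 96-DPI baseline (placeholder)
LOGFONTW lf = design_font;
lf.lfHeight = MulDiv(lf.lfHeight, static_cast<int>(dpi), 96);   // 96 DPI design -> actual DPI
HFONT scaled_font = CreateFontIndirectW(&lf);
if (!scaled_font)
{
    // Error handling
}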
This site suggests something similar, but with GetDpiForMonitor.
I cannot say for sure whether the general approach of manually scaling the font size according to some DPI-dependent factor is a robust and future-proof way to scale fonts (it does seem to be the way to go for scaling non-font GUI elements, though). However, since .NET basically also just calculates some magic factor based on some sort of DPI value, it's probably a pretty good guess.
Also, you'll want to cache that HFONT; LOGFONT-to-HFONT conversions are not negligible.
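A minimal caching sketch (illustrative only; the function name and the choice to cache per DPI value are assumptions, and the fonts are deliberately kept alive for the lifetime of the process):
#include <unordered_map>

// One message font per DPI value, created lazily and reused afterwards
HFONT GetCachedMessageFont(UINT dpi)
{
    static std::unordered_map<UINT, HFONT> cache;
    auto it = cache.find(dpi);
    if (it != cache.end())
        return it->second;

    NONCLIENTMETRICSW ncm = { sizeof(ncm) };
    if (!SystemParametersInfoForDpi(SPI_GETNONCLIENTMETRICS, sizeof(ncm), &ncm, 0, dpi))
        return nullptr;  // let the caller fall back to a stock font

    HFONT font = CreateFontIndirectW(&ncm.lfMessageFont);
    cache[dpi] = font;
    return font;
}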
See also (references):
WinForms gets its default font using GetStockObject(DEFAULT_GUI_FONT) (there are a few exceptions, though, mostly obsolete):
IntPtr handle = UnsafeNativeMethods.GetStockObject(NativeMethods.DEFAULT_GUI_FONT);
try {
    Font fontInWorldUnits = null;

    // SECREVIEW : We know that we got the handle from the stock object,
    //           : so this is always safe.
    //
    IntSecurity.ObjectFromWin32Handle.Assert();
    try {
        fontInWorldUnits = Font.FromHfont(handle);
    }
    finally {
        CodeAccessPermission.RevertAssert();
    }

    try {
        defaultFont = FontInPoints(fontInWorldUnits);
    }
    finally {
        fontInWorldUnits.Dispose();
    }
}
catch (ArgumentException) {
}
https://referencesource.microsoft.com/#System.Drawing/commonui/System/Drawing/SystemFonts.cs,355
The HFONT is converted to GDI+, and then the GDI+ font retrieved this way is transformed using FontInPoints:
private static Font FontInPoints(Font font) {
    return new Font(font.FontFamily, font.SizeInPoints, font.Style, GraphicsUnit.Point, font.GdiCharSet, font.GdiVerticalFont);
}
https://referencesource.microsoft.com/#System.Drawing/commonui/System/Drawing/SystemFonts.cs,452
The content of the SizeInPoints getter is already listed above.
https://referencesource.microsoft.com/#System.Drawing/commonui/System/Drawing/Advanced/Font.cs,992

Bad relocalization after motion tracking loss

With my team we want to implement area learning for relocalization purposes in our projects.
I added this functionality and it seems to work well. But when a drift disaster happens (motion tracking is lost) and the main camera is instantaneously projected to "the other side of the universe", the program doesn't succeed in relocalizing it: the camera ends up 2 meters below, or 3 meters beside, where it should be.
Is this an area description problem (because it doesn't contain enough points of interest)?
Or have I still not understood how to use area learning?
Thanks a lot.
P.S.:
I use the Unity SDK.
public void Update()
{
    TangoPoseData pose = new TangoPoseData();
    TangoCoordinateFramePair pair;

    if (poseLocalized)
    {
        pair.baseFrame = TangoEnums.TangoCoordinateFrameType.TANGO_COORDINATE_FRAME_AREA_DESCRIPTION;
        pair.targetFrame = TangoEnums.TangoCoordinateFrameType.TANGO_COORDINATE_FRAME_DEVICE;
    }
    else
    {
        pair.baseFrame = TangoEnums.TangoCoordinateFrameType.TANGO_COORDINATE_FRAME_START_OF_SERVICE;
        pair.targetFrame = TangoEnums.TangoCoordinateFrameType.TANGO_COORDINATE_FRAME_DEVICE;
    }

    double timestamp = VideoOverlayProvider.RenderLatestFrame(TangoEnums.TangoCameraId.TANGO_CAMERA_COLOR);
    PoseProvider.GetPoseAtTime(pose, timestamp, pair);
    m_status = pose.status_code;

    if (pose.status_code == TangoEnums.TangoPoseStatusType.TANGO_POSE_VALID)
    {
        // this update is the same regardless of the pair's base frame
        Matrix4x4 ssTd = UpdateTransform(pose);
        m_uwTuc = m_uwTss * ssTd * m_dTuc;
    }
}
public void OnTangoPoseAvailable(TangoPoseData pose)
{
    if (pose == null)
    {
        return;
    }

    // Relocalization signal
    if (pose.framePair.baseFrame == TangoEnums.TangoCoordinateFrameType.TANGO_COORDINATE_FRAME_AREA_DESCRIPTION &&
        pose.framePair.targetFrame == TangoEnums.TangoCoordinateFrameType.TANGO_COORDINATE_FRAME_START_OF_SERVICE)
    {
        poseLocalized = true;
    }

    // If pose status is not valid, nothing is valid
    if (!(pose.status_code == TangoEnums.TangoPoseStatusType.TANGO_POSE_VALID))
    {
        poseLocalized = false;
        // Am I forgetting something here?
    }
}
I've regularly observed that Area Learning's localization and re-localization can produce x,y pose coordinates that are off by a few meters.
Coordinates can be more accurate if I take more care in recording an area well before moving to a new area.
Upon re-localization the coordinate accuracy is improved if the tablet is able to observe the area using slow, consistent movements before traveling to a new area.
If I learn a new area, I always return to a well-known area for better accuracy, as described for drift correction.
I have two Tango tablets using a Java app that is autonomously navigating an iRobot in my home. I've setup a grid test site using 1 meter tape marks to make the observations.

AS3: FPS drops down significantly because of innocent mouse moves

I have ~10000 objects in my game and exactly 60 FPS (the maximum) when the mouse isn't moved. But as soon as you start moving the mouse in circles, the FPS drops toward 30, averaging around 45. When you stop the mouse it's INSTANTLY 60 again (as if the program had lost its heartbeat). The SWF is run standalone - without any browser.
I removed any MouseEvent.MOUSE_MOVE listeners and made mouseEnabled=false and mouseChildren=false for the main class.
I increased my FPS one by one from 12 to 60 - I gave a name to each frame I bore, and it's really painful watching 15 of them die for nothing...
Sample code:
public class Main extends Sprite
{
    private var _periods : int = 0;

    /** Idling FPS is 23. Move the mouse to drop FPS to 21. */
    public function Main() : void
    {
        // true if you are ready to drop the FPS to 18 instead of 21 when moving the mouse:
        const readyToKill2MoreFrames : Boolean = true;
        if ( readyToKill2MoreFrames )
        {
            var ellipse : Sprite = new Sprite;
            ellipse.graphics.beginFill( 0x00FF00 );
            ellipse.graphics.drawEllipse( 300, 300, 400, 200 );
            ellipse.graphics.endFill();
            addChild( ellipse );
            // uncomment to fall only to 21 instead of 18:
            /*ellipse.mouseChildren = false;
            ellipse.mouseEnabled = false;*/
        }

        var fps : TextField = new TextField;
        // uncommenting doesn't change FPS:
        //fps.mouseEnabled = false;
        addChild( fps );
        fps.text = "???";
        fps.scaleX = fps.scaleY = 3;

        var timer : Timer = new Timer( 1000 );
        timer.addEventListener( TimerEvent.TIMER, function( ... args ) : void
        {
            fps.text = _periods.toString();
            _periods = 0;
        } );
        timer.start();

        addEventListener( Event.ENTER_FRAME, function( ... args ) : void
        {
            // it seems the PC is too fast to show the mouse-movement drawbacks
            // when it has nothing else to do, so let's keep it busy:
            for ( var i : int = 0; i < 500000; ++i )
            {
                var j : int = 2 + 2;
            }
            ++_periods;
        } );
    }
}
You've probably moved on to more modern problems, but I've recently struggled with this issue myself, so here's an answer for future unfortunates stuck with the problems created by Adobe's decade-old sins.
It turns out legacy support for old-style buttons is the culprit. Quote from Adobe's tutorial on the excellent Scout profiling tool:
"Flash Player has some special code to handle old-style button objects (the kind that you create in Flash Professional).
Independently of looking for ActionScript event handlers for mouse
events, it searches the display list for any of these buttons whenever
the mouse moves. This can be expensive if you have a large number of
objects on the display list. Unfortunately, this operation happens
even if you don't use old-style button objects, but Adobe is working
on a fix for this."
It turns out Adobe never really got around to fixing this, so any large number of DisplayObjects will wreak havoc on your FPS while the mouse is moved. The only fix is to merge them somehow, e.g. by batch drawing them using Graphics. In my early tests, setting mouseEnabled = false seems to have no real effect, either.
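As a rough sketch of the batching idea (names are illustrative; the point is that one Shape on the display list replaces thousands of children, and a Shape is not an InteractiveObject, so the per-mouse-move search has almost nothing left to walk):
import flash.display.Shape;
import flash.geom.Rectangle;

// One Shape holds all the geometry instead of thousands of separate children
var canvas : Shape = new Shape();
addChild( canvas );

function redrawAll( items : Vector.<Rectangle> ) : void
{
    canvas.graphics.clear();
    canvas.graphics.beginFill( 0x00FF00 );
    for each ( var r : Rectangle in items )
    {
        canvas.graphics.drawRect( r.x, r.y, r.width, r.height );
    }
    canvas.graphics.endFill();
}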

WP7 zxing scan not reliable

I've printed a few short QR codes (like "HAEB16653") on a page using this algorithm:
private void CreateQRCodeFile(int size, string filename, string codecontent)
{
    QRCodeWriter writer = new QRCodeWriter();
    com.google.zxing.common.ByteMatrix matrix;
    matrix = writer.encode(codecontent, BarcodeFormat.QR_CODE, size, size, null);

    Bitmap img = new Bitmap(size, size);
    Color Color = Color.FromArgb(0, 0, 0);

    for (int y = 0; y < matrix.Height; ++y)
    {
        for (int x = 0; x < matrix.Width; ++x)
        {
            Color pixelColor = img.GetPixel(x, y);

            // Find the colour of the dot
            if (matrix.get_Renamed(x, y) == -1)
            {
                img.SetPixel(x, y, Color.White);
            }
            else
            {
                img.SetPixel(x, y, Color.Black);
            }
        }
    }

    img.Save(filename, ImageFormat.Png);
}
The printed barcodes work very well and fast with the integrated WP7 bing scan&search.
When I try to scan the very same printed QR codes with Stéphanie Hertrich's sample app, scanning is very slow; most do not scan at all, or are only recognized when I slowly rotate the camera around.
How do I get my scanning to be as reliable as the integrated barcode recognition? I only need to scan QrCodes, so I disabled all the others, still it does not work most of the time.
Is there maybe some other barcode scanning library which is working better?
The Silverlight port in Stéphanie Hertrich's sample app is very old. It seems to me that the project at CodePlex hasn't been maintained for more than a year. You should try one of the newer, maintained ports like ZXing.Net.
zxing works very well -- just try it on Android. I would not be surprised if it is what powers the Bing search.
The problems are likely in the port. Any non-Java port is at best old and incomplete. I also can't speak to the efficiency of the approach used in the sample you are looking at. For example, is it really binarizing the image from the APIs correctly? Also make sure it is not using TRY_HARDER mode.
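If you do switch to ZXing.Net, restricting the decoder to QR codes and leaving TRY_HARDER off looks roughly like this (a sketch against the ZXing.Net BarcodeReader API; the WP7/Silverlight wrapper and the exact bitmap type passed to Decode may differ in your port, and cameraFrameBitmap is a placeholder):
var reader = new ZXing.BarcodeReader();
reader.Options.PossibleFormats = new System.Collections.Generic.List<ZXing.BarcodeFormat> { ZXing.BarcodeFormat.QR_CODE };
reader.Options.TryHarder = false;   // keep the fast path for live camera preview frames

var result = reader.Decode(cameraFrameBitmap);   // cameraFrameBitmap: a camera preview frame (placeholder)
if (result != null)
{
    // result.Text holds the decoded content, e.g. "HAEB16653"
}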
There is no objective answer to this question...
My personal opinion is that the ZXing lib you tried (Stéphanie Hertrich's sample app) is the best you can get. As far as I know it is used on other platforms, too (e.g. Android).
When I tested the lib a few months ago, I had the impression it worked very reliably and quickly, but it may be that your circumstances were different (lighting, camera, angle, etc.).

How to get correct hDevMode values from CPrintDialogEx (PrintDlgEx)?

I'm displaying a CPrintDialogEx dialog to choose a printer and modify the settings. I set the hDevNames member so that a default printer will be selected, but I leave hDevMode set to NULL. On successful return I pull some values such as paper size out of the returned DEVMODE structure from hDevMode.
I'm having a problem because hDevMode appears to be initialized with the values from the default printer that I passed in, not the printer that was finally selected. How do I get the parameters from the actual selected printer?
As requested here's the relevant part of the code. I've deleted some of it in the interest of space. TOwnedHandle is a smart pointer I wrote for holding a memory handle and locking it automatically.
CPrintDialogEx dlg(PD_ALLPAGES | PD_NOCURRENTPAGE | PD_NOPAGENUMS | PD_NOSELECTION, this);
ASSERT(dlg.m_pdex.hDevMode == NULL);
ASSERT(dlg.m_pdex.hDevNames == NULL);
dlg.m_pdex.hDevNames = GlobalAlloc(GHND, sizeof(DEVNAMES) + iSizeName);
DEVNAMES * pDevNames = (DEVNAMES *) GlobalLock(dlg.m_pdex.hDevNames);
// ...
GlobalUnlock(dlg.m_pdex.hDevNames);
if ((dlg.DoModal() == S_OK) && (dlg.m_pdex.dwResultAction == PD_RESULT_PRINT))
{
    TOwnedHandle<DEVMODE> pDevMode = dlg.m_pdex.hDevMode;
    TRACE("Printer config = %dx%d %d\n", (int)pDevMode->dmPaperWidth, (int)pDevMode->dmPaperLength, (int)pDevMode->dmOrientation);
    // ...
}
Edit: I've determined that I don't get the problem if I don't set the hDevNames parameter. I wonder if I've discovered a Windows bug? This is in XP, I don't have a more recent version of Windows handy to test with.
I've distilled the code into a test that doesn't use MFC, this is strictly a Windows API problem. This is the whole thing, nothing left out except the definition of pDefaultPrinter - but of course it doesn't do anything useful anymore.
PRINTDLGEX ex = {sizeof(PRINTDLGEX)};
ex.hwndOwner = m_hWnd;
ex.Flags = PD_ALLPAGES | PD_NOCURRENTPAGE | PD_NOPAGENUMS | PD_NOSELECTION;
ex.nStartPage = START_PAGE_GENERAL;
#if 1
int iSizeName = (strlen(pDefaultPrinter) + 1) * sizeof(char);
ex.hDevNames = GlobalAlloc(GHND, sizeof(DEVNAMES) + iSizeName);
DEVNAMES * pDevNames = (DEVNAMES *) GlobalLock(ex.hDevNames);
ASSERT(pDevNames != NULL);
pDevNames->wDeviceOffset = sizeof(DEVNAMES);
strcpy((char *)pDevNames + pDevNames->wDeviceOffset, pDefaultPrinter);
GlobalUnlock(ex.hDevNames);
#endif
HRESULT hr = PrintDlgEx(&ex);
if ((hr == S_OK) && (ex.dwResultAction == PD_RESULT_PRINT))
{
    DEVMODE * pdm = (DEVMODE *) GlobalLock(ex.hDevMode);
    ASSERT(pdm != NULL);
    TRACE("Printer config = %dx%d %d\n", (int)pdm->dmPaperWidth, (int)pdm->dmPaperLength, (int)pdm->dmOrientation);
    GlobalUnlock(ex.hDevMode);

    DEVNAMES * pdn = (DEVNAMES *) GlobalLock(ex.hDevNames);
    ASSERT(pdn != NULL);
    TRACE(_T("Printer device = %s\n"), (char *)pdn + pdn->wDeviceOffset);
    GlobalUnlock(ex.hDevNames);
}
If I can't get a fix, I'd love to hear of a work-around.
After much head scratching I think I've figured it out.
When the dialog comes up initially, the hDevMode member gets filled with the defaults for the printer that is initially selected. If you select a different printer before closing the dialog, that DEVMODE structure is presented to the new printer driver; if the paper size doesn't make sense to the driver it may change it, and the drivers are not consistent.
The reason this tripped me up is that I was switching between three printers: two label printers with very different characteristics, and a laser printer with US Letter paper.
The laser printer always responds with the proper dimensions but may indicate a wrong paper size code.
The first label printer will override the size provided by the laser printer but not the other label printer.
The second label printer will accept the size provided by the first label printer, because it's capable of using that size even though it's not loaded and not configured. It modifies the size provided by the laser printer by returning the maximum width and the Letter size length of 11 inches.
I determined two ways to work around the problem. The first is to implement IPrintDialogCallback and respond to SelectionChange calls by reloading the default DEVMODE for the newly selected printer. EDIT: I tried this and it does not work. CPrintDialogEx already implements an IPrintDialogCallback interface, making this easy. It appears that PrintDlgEx has its own internal handle that it uses to track the current DEVMODE structure and only uses the one in the PRINTDLGEX structure for input/output. There's no way to affect the DEVMODE while the dialog is up, and by the time it returns it's too late.
The second solution is to ignore the returned results entirely and work from the default paper configuration for the printer. Any changes made from the printer defaults within the dialog are lost completely, but for my application this is acceptable.
bool MyDialog::GetPaperSize(const TCHAR * pPrinterName, double & dPaperWidth, double & dPaperLength)
{
    // you need to open the printer before you can get its properties
    HANDLE hPrinter;
    if (OpenPrinter((TCHAR *)pPrinterName, &hPrinter, NULL))
    {
        // determine how much space is needed for the DEVMODE structure by the printer driver
        int iDevModeSize = DocumentProperties(m_hWnd, hPrinter, (TCHAR *)pPrinterName, NULL, NULL, 0);
        ASSERT(iDevModeSize >= sizeof(DEVMODE));

        // allocate a DEVMODE structure and initialize it to a clean state
        std::vector<char> buffer(iDevModeSize, 0);
        DEVMODE * pdm = (DEVMODE *) &buffer[0];
        pdm->dmSpecVersion = DM_SPECVERSION;

        DocumentProperties(m_hWnd, hPrinter, (TCHAR *)pPrinterName, pdm, NULL, DM_OUT_BUFFER);
        ClosePrinter(hPrinter);

        // convert paper size from tenths of a mm to inches
        dPaperWidth = pdm->dmPaperWidth / 254.;
        dPaperLength = pdm->dmPaperLength / 254.;
        return true;
    }
    return false;
}
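Hypothetical usage after the dialog returns (deviceName here stands for the printer name copied out of hDevNames while it was still locked, as in the earlier snippet):
// deviceName: the selected printer's name, copied from hDevNames while locked (placeholder)
double dWidthInches = 0.0, dLengthInches = 0.0;
if (GetPaperSize(deviceName, dWidthInches, dLengthInches))
{
    TRACE("Default paper size = %.2f x %.2f inches\n", dWidthInches, dLengthInches);
}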
