GPS Intermediate Driver Constantly Return 0 0 LAT LNG - windows-mobile-6.5

I am confused because I know my GPS Device State and Service State are both ON, yet Satellites in View and Latitude/Longitude are both blank. I have configured my programs to use port GPD1, and my hardware uses COM7. This seems to be the correct configuration, so why am I getting nothing back?

Latitude and Longitude are only filled if there is a Fix (a valid geo position):
// Call the native method, passing in our native buffer.
int result = GPSGetPosition(gpsHandle, ptr, 500000, 0);

if (result == 0)
{
    // Native call succeeded; marshal the native data into our managed structure.
    gpsPosition = (GpsPosition)Marshal.PtrToStructure(ptr, typeof(GpsPosition));

    if (maxAge != TimeSpan.Zero)
    {
        // Check whether the data is recent enough.
        if (!gpsPosition.TimeValid || DateTime.Now - maxAge > gpsPosition.Time)
        {
            gpsPosition = null;
        }
    }
}
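For illustration, a minimal calling-side sketch (assuming a gps instance of the Gps class and the GpsPosition wrapper from the Windows Mobile SDK managed GPS sample, which expose validity flags such as LatitudeValid/LongitudeValid; names are taken from that sample, not from your code):

// Sketch only: class and property names follow the SDK's managed GPS sample.
GpsPosition position = gps.GetPosition();
if (position != null && position.LatitudeValid && position.LongitudeValid)
{
    // Coordinates are only meaningful once the receiver flags them as valid.
    double lat = position.Latitude;
    double lng = position.Longitude;
}
else
{
    // No fix yet: the driver returns the structure, but the latitude/longitude
    // fields are not flagged valid and read back as 0.
}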
For a fix (a valid position) the GPS receiver needs at least three valid satellites (four for a 3D fix).
See the GPS sample I linked in your other question; it shows some more background information on what is going on with the GPS signals.
A tip: the time to first fix (TTFF) may take up to 15-20 minutes even with a clear view of the sky if there is no alternate EE data source and the GPS module has not been moved too much.
The EE (extended ephemeris) data is built from the satellites' broadcasts, and the data rate is very low. The EE data can also be provided via an internet data connection, but whether that is used depends on the GPS module. This EE data specifies the real position of all GPS satellites for two weeks.
If a GPS receiver is switched off and moved ~200 km, the location has to be calculated from scratch (which may take another 15 minutes).
In practice, current GPS receivers provide alternate 'feeds' for EE data, for example GPSeXtra (internet data) or MS assisted GPS (cell phone tower ID and location).

Related

What is the correct usage of FrameTiming and FrameTimingManager

I'm trying to log the time the GPU takes to render a frame. To do this I found that Unity implements a FrameTiming struct and a class named FrameTimingManager.
The FrameTiming struct has a property gpuFrameTime which sounds like exactly what I need; however the value is never set, and the documentation on it doesn't provide much help either:
public double gpuFrameTime;
Description
The GPU time for a given frame, in ms.
Looking further I found the FrameTimingManager class, which contains a static method GetGpuTimerFrequency(), whose not-so-helpful documentation states only:
Returns ulong GPU timer frequency for current platform.
Description
This returns the frequency of GPU timer on the current platform, used to interpret timing results. If the platform does not support returning this value it will return 0.
Calling this method in an update loop only ever yields 0 (on both Windows 10 running Unity 2019.3 and an Android phone running Android 10).
private FrameTiming frameTiming;

private void OnEnable()
{
    frameTiming = new FrameTiming();
}

private void Update()
{
    FrameTimingManager.CaptureFrameTimings();

    var result = FrameTimingManager.GetGpuTimerFrequency();
    Debug.LogFormat("result: {0}", result); // logs 0

    var gpuFrameTime = frameTiming.gpuFrameTime;
    Debug.LogFormat("gpuFrameTime: {0}", gpuFrameTime); // logs 0
}
So what's the deal here: am I using the FrameTimingManager incorrectly, or are Windows and Android simply not supported? (Unity mentions in the docs that not all platforms are supported, but nowhere do they give a list of supported devices...)
While grabbing documentation links for the question I stumbled across some forum posts that shed light on the issue, so I'm leaving this here for future reference.
The FrameTimingManager is indeed not supported for Windows, and only has limited support for Android devices, more specifically only for Android Vulkan devices. As explained by jwtan_Unity on the forums here (emphasis mine):
FrameTimingManager was introduced to support Dynamic Resolution. Thus, it is only supported on platforms that support Dynamic Resolution. These platforms are currently Xbox One, PS4, Nintendo Switch, iOS, macOS and tvOS (Metal only), Android (Vulkan only), Windows Standalone and UWP (DirectX 12 only).
Now, to be able to use FrameTimingManager.GetGpuTimerFrequency() we need to do something else first: we need to take a snapshot of the current timings using FrameTimingManager.CaptureFrameTimings (this needs to be done every frame). From the docs:
This function triggers the FrameTimingManager to capture a snapshot of FrameTiming's data, that can then be accessed by the user.
The FrameTimingManager tries to capture as many frames as the platform allows but will only capture complete timings from finished and valid frames so the number of frames it captures may vary. This will also capture platform specific extended frame timing data if the platform supports more in depth data specifically available to it.
As explained by Timothyh_Unity on the forums here:
CaptureFrameTimings() - This should be called once per frame (presuming you want timing data that frame). Basically this function captures a user facing collection of timing data.
So the total code to get the GPU frequency (on a supported device) would be
private void Update()
{
FrameTimingManager.CaptureFrameTimings();
var result = FrameTimingManager.GetGpuTimerFrequency();
Debug.LogFormat("result: {0}", result);
}
Note that all FrameTimingManager methods are static, and do not require you to instantiate a manager first.
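To actually read gpuFrameTime (the value the question is after), the captured timings also have to be copied out each frame, e.g. with FrameTimingManager.GetLatestTimings. A sketch for a supported platform:

private readonly FrameTiming[] frameTimings = new FrameTiming[1];

private void Update()
{
    // Capture must run every frame before timings can be read back.
    FrameTimingManager.CaptureFrameTimings();

    // Copy the most recent complete frame's timings into our array; this
    // returns 0 on unsupported platforms or while no finished frame is available.
    uint copied = FrameTimingManager.GetLatestTimings(1, frameTimings);
    if (copied > 0)
    {
        Debug.LogFormat("gpuFrameTime: {0} ms", frameTimings[0].gpuFrameTime);
    }
}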
Why none of this is properly documented by Unity beats me...

How to best organize constant buffers

I'm having some trouble wrapping my head around how to organize the constant buffers in a very basic D3D11 engine I'm making.
My main question is: Where does the biggest performance hit take place? When using Map/Unmap to update buffer data or when binding the cbuffers themselves?
At the moment, I'm deciding between the following two implementations for a sort of "shader-wrapper" class:
Holding an array of 14 ID3D11Buffer*s
class VertexShader
{
    ...
public:
    void Bind(ID3D11DeviceContext* context)
    {
        // Bind all 14 buffers at once
        context->VSSetConstantBuffers(0, 14, &buffers[0]);
        context->VSSetShader(pVS, nullptr, 0);
    }

    // Set the data for a buffer in a particular slot
    void SetData(ID3D11DeviceContext* context, UINT slot, size_t size, const void* pData)
    {
        D3D11_MAPPED_SUBRESOURCE mappedBuffer = {};
        context->Map(buffers[slot], 0, D3D11_MAP_WRITE_DISCARD, 0, &mappedBuffer);
        memcpy(mappedBuffer.pData, pData, size);
        context->Unmap(buffers[slot], 0);
    }

private:
    ID3D11Buffer* buffers[14];
    ID3D11VertexShader* pVS;
};
This approach would have the shader bind all the cbuffers in a single batch of 14. If the shader has cbuffers registered to b0, b1, b3 the array would look like -> [cb|cb|0|cb|0|0|0|0|0|0|0|0|0|0]
Constant Buffer wrapper that knows how to bind itself
class VertexShader
{
    ...
public:
    void Bind(ID3D11DeviceContext* context)
    {
        // all the buffers bind themselves
        for (auto& cb : bufferMap)
            cb.second->Bind(context);
        context->VSSetShader(pVS, nullptr, 0);
    }

    // Set the data for a buffer with a particular ID
    void SetData(const std::string& id, size_t size, const void* pData)
    {
        // table lookup into bufferMap, then Map/Unmap
    }

private:
    std::unordered_map<std::string, ConstantBuffer*> bufferMap;
    ID3D11VertexShader* pVS;
};
This approach would hold "ConstantBuffers" in a hash table, each one would know what slot it's bound to and how to bind itself to the pipeline. I would have to call VSSetConstantBuffers() individually for each cbuffer since the ID3D11Buffer*s wouldn't be contiguous anymore, but the organization is friendlier and has a bit less wasted space.
How would you typically organize the relationship between CBuffers, Shaders, SRVs, etc.? I'm not looking for a do-all solution, just some general advice and things to read more about from people hopefully more experienced than I am.
Also, if @Chuck Walbourn sees this: I'm a fan of your work and am using DXTK/WICTextureLoader for this project!
Thanks.
Constant Buffers were a major feature of Direct3D 10, so one of the best talks on the subject was given way back at Gamefest 2007:
Windows to Reality: Getting the Most out of Direct3D 10 Graphics in Your Games
See also Why Can Updating Constant Buffers be so painfully slow? (NVIDIA)
The original intention was for CBs to be organized by frequency of update: something like one CB for values set 'per level', another for 'per frame', another for 'per object', another 'per pass', etc. The assumption is therefore that if you change any part of a CB, you are going to upload the whole thing. Bandwidth between the CPU and GPU is the real bottleneck here.
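As a hedged sketch of what that frequency-based split can look like in D3D11 (all names, struct layouts and buffer-creation details are illustrative; the buffers are assumed to be D3D11_USAGE_DYNAMIC with CPU write access):

#include <d3d11.h>
#include <DirectXMath.h>
#include <vector>
#include <cstring>

// Hypothetical layout: per-frame data uploaded once, per-object data re-uploaded per draw.
struct PerFrameCB  { DirectX::XMFLOAT4X4 viewProj; float time; float pad[3]; };
struct PerObjectCB { DirectX::XMFLOAT4X4 world;    DirectX::XMFLOAT4 tint; };

void RenderFrame(ID3D11DeviceContext* ctx,
                 ID3D11Buffer* perFrameCB,      // assumed dynamic
                 ID3D11Buffer* perObjectCB,     // assumed dynamic
                 const PerFrameCB& frameData,
                 const std::vector<PerObjectCB>& objects)
{
    D3D11_MAPPED_SUBRESOURCE mapped = {};

    // Upload the per-frame block exactly once.
    ctx->Map(perFrameCB, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped);
    memcpy(mapped.pData, &frameData, sizeof(frameData));
    ctx->Unmap(perFrameCB, 0);

    // b0 = per-frame, b1 = per-object (matching register(b0)/register(b1) in HLSL).
    ID3D11Buffer* cbs[2] = { perFrameCB, perObjectCB };
    ctx->VSSetConstantBuffers(0, 2, cbs);

    // Only the small per-object block is re-uploaded inside the loop.
    for (const PerObjectCB& obj : objects)
    {
        ctx->Map(perObjectCB, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped);
        memcpy(mapped.pData, &obj, sizeof(obj));
        ctx->Unmap(perObjectCB, 0);
        // ... issue the draw call for this object ...
    }
}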
For this approach to be effective, you basically need to set up all your shaders to use the same scheme. This can be difficult to manage, especially when so many modern material systems are art-driven.
Another approach to CBs is to use them like a dynamic VB for particle submission: you fill one up with short-lived constants, submit work, and then reset the thing each frame. This is basically what people do for DirectX 12 in many cases. The problem is that without the ability to update only parts of a CB, it's too slow. The 'partial constant buffer updates and offsets' optional features in DirectX 11.1 were a way to make this work. That said, this feature is not supported on Windows 7 and is 'optional' on newer versions of Windows, so you have to support two code paths to use it.
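A sketch of how that optional D3D11.1 feature can be probed before relying on it (device, context1, bigCB and drawIndex are hypothetical; offsets and counts for VSSetConstantBuffers1 are in units of 16-byte constants and must be multiples of 16):

// Hypothetical check for the optional "constant buffer offsetting / partial
// update" feature before taking the fast path.
D3D11_FEATURE_DATA_D3D11_OPTIONS options = {};
if (SUCCEEDED(device->CheckFeatureSupport(D3D11_FEATURE_D3D11_OPTIONS,
                                          &options, sizeof(options)))
    && options.ConstantBufferOffsetting)
{
    // Bind a 16-constant (256-byte) window at an offset into one large CB.
    UINT firstConstant = 16 * drawIndex;   // illustrative per-draw offset
    UINT numConstants  = 16;
    context1->VSSetConstantBuffers1(0, 1, &bigCB, &firstConstant, &numConstants);
}
else
{
    // Fallback: whole-buffer Map(WRITE_DISCARD)/Unmap per update, as above.
}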
TL;DR: you can technically have a lot of CBs bound at once, but the key thing is to keep the individual size small for the ones that change often. Also assume any change to a CB is going to require uploading the whole thing to the GPU every time you change it.

Native way to get the feature report descriptor of HID device?

We have some HID devices (touch digitizers) that communicate with an internal R&D tool. This tool parses the raw feature reports from the devices to draw the touch reports along with some additional data that is present in the raw feature report but filtered out by the HID driver of Windows 7 (e.g., pressure data is not present in WM_TOUCH messages).
However, we have started working with some devices that may have different firmware variants, and thus do not share the same ordering or byte length of the fields, so I need to modify our R&D tool so that it adapts transparently to all the devices.
The devices come from the same manufacturer (ourselves) and share the same device info, so using these fields to differentiate between the different firmwares is not an option. What I would like to do is to get the HID feature report descriptor sent by the device and update dynamically our feature report parsing method based on this information.
However, I didn't manage to find the correct method to call to get this descriptor when browsing the Windows API. What I have found so far is the Raw Input page on MSDN, but I'm not sure what to do next. Can I find the required information in the RID_DEVICE_HID structure? Or do I need to call a completely different API?
Thanks in advance for your help!
OK, finally I've got something (almost completely) functional. As suggested by mcoill, I used the HidP_xxx() family of functions, but it needs a little bit of data preparation first.
I based my solution on this example code that targets USB joysticks and adapted it to touch digitizer devices.
If someone else also gets confused by the online doc, here are the required steps involved in the process:
registering the application for a Raw Input device at launch.
This is done by calling the function RegisterRawInputDevices(&Rid, 1, sizeof(Rid)), where Rid is a RAWINPUTDEVICE with the following properties set (in order to get a touch digitizer):
Rid.usUsage = 0x04;
Rid.usUsagePage = 0x0d;
Rid.dwFlags = RIDEV_INPUT_SINK;
registering a callback OnInput(LPARAM lParam) for the events WM_INPUT since the Rid device will generate this type of events;
the OnInput(LPARAM lParam) method will get the data from this event in two steps:
// Parse the raw input header to read its size.
UINT bufferSize;
GetRawInputData((HRAWINPUT)lParam, RID_INPUT, NULL, &bufferSize, sizeof(RAWINPUTHEADER));

// Allocate memory for the raw input data and retrieve it.
PRAWINPUT rawInput = (PRAWINPUT)HeapAlloc(GetProcessHeap(), 0, bufferSize);
GetRawInputData((HRAWINPUT)lParam, RID_INPUT, rawInput /* NOT NULL */, &bufferSize, sizeof(RAWINPUTHEADER));
it then calls a parsing method that creates the HIDP_PREPARSED_DATA structure required by the lookup functions:
// Again, read the data size, allocate, then retrieve.
GetRawInputDeviceInfo(rawInput->header.hDevice, RIDI_PREPARSEDDATA, NULL, &bufferSize);
PHIDP_PREPARSED_DATA preparsedData = (PHIDP_PREPARSED_DATA)HeapAlloc(GetProcessHeap(), 0, bufferSize);
GetRawInputDeviceInfo(rawInput->header.hDevice, RIDI_PREPARSEDDATA, preparsedData, &bufferSize);
The preparsed data is split into capabilities:
// Create a structure that will hold the capabilities.
HIDP_CAPS caps;
HidP_GetCaps(preparsedData, &caps);

USHORT capsLength = caps.NumberInputValueCaps;
PHIDP_VALUE_CAPS valueCaps = (PHIDP_VALUE_CAPS)HeapAlloc(GetProcessHeap(), 0, capsLength * sizeof(HIDP_VALUE_CAPS));
HidP_GetValueCaps(HidP_Input, valueCaps, &capsLength, preparsedData);
And capabilities can be asked for their value:
// Read a sample value.
ULONG value;
HidP_GetUsageValue(HidP_Input, valueCaps[i].UsagePage, 0, valueCaps[i].Range.UsageMin, &value,
                   preparsedData, (PCHAR)rawInput->data.hid.bRawData, rawInput->data.hid.dwSizeHid);
Wouldn't HidD_GetPreparsedData(...), HidP_GetValueCaps(HidP_Feature, ...) and their ilk give you enough information without having to get the raw feature report?
HIDClass Support Routines on MSDN
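For reference, a hedged sketch of that alternative route (devicePath is assumed to come from SetupDi device-interface enumeration; link against hid.lib; error handling trimmed):

#include <windows.h>
#include <hidsdi.h>   // HidD_GetPreparsedData, HidP_GetCaps, HidP_GetValueCaps

void DumpFeatureCaps(LPCWSTR devicePath)
{
    HANDLE device = CreateFileW(devicePath, GENERIC_READ | GENERIC_WRITE,
                                FILE_SHARE_READ | FILE_SHARE_WRITE,
                                NULL, OPEN_EXISTING, 0, NULL);
    if (device == INVALID_HANDLE_VALUE) return;

    PHIDP_PREPARSED_DATA preparsed = NULL;
    if (HidD_GetPreparsedData(device, &preparsed))
    {
        HIDP_CAPS caps;
        HidP_GetCaps(preparsed, &caps);

        // Feature-report value capabilities: usages, logical ranges, report IDs --
        // enough to reconstruct the feature report layout per firmware variant.
        USHORT count = caps.NumberFeatureValueCaps;
        PHIDP_VALUE_CAPS valueCaps =
            (PHIDP_VALUE_CAPS)HeapAlloc(GetProcessHeap(), 0, count * sizeof(HIDP_VALUE_CAPS));
        HidP_GetValueCaps(HidP_Feature, valueCaps, &count, preparsed);
        // ... inspect valueCaps[i].UsagePage / Range.UsageMin / ReportID ...

        HeapFree(GetProcessHeap(), 0, valueCaps);
        HidD_FreePreparsedData(preparsed);
    }
    CloseHandle(device);
}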

How many buffers is enough in ALLOCATOR_PROPERTIES::cBuffers?

I'm working on a custom video transform filter derived from CTransformFilter. It doesn't do anything unusual in DirectShow terms such as extra internal buffering of media samples, queueing of output samples or dynamic format changes.
Graphs in GraphEdit containing two instances of my filter connected end to end (the output of the first connected to the input of the second) hang when play is pressed. The graph is definitely not hanging within the ::Transform method override. The second filter instance is not connected directly to a video renderer.
The problem doesn't happen if a colour converter is inserted between the two filters. If I increase the number of buffers requested (ALLOCATOR_PROPERTIES::cBuffers) from 1 to 3 then the problem goes away. The original DecideBufferSize override is below and is similar to lots of other sample DirectShow filter code.
What is a robust policy for setting the number of requested buffers in a DirectShow filter (transform or otherwise)? Is code that requests one buffer out of date for modern requirements? Is my problem too few buffers or is increasing the number of buffers masking a different problem?
HRESULT MyFilter::DecideBufferSize(IMemAllocator *pAlloc, ALLOCATOR_PROPERTIES *pProp)
{
    AM_MEDIA_TYPE mt;
    HRESULT hr = m_pOutput->ConnectionMediaType(&mt);
    if (FAILED(hr)) {
        return hr;
    }

    BITMAPINFOHEADER * const pbmi = GetBitmapInfoHeader(mt);
    pProp->cbBuffer = DIBSIZE(*pbmi);
    if (pProp->cbAlign == 0) {
        pProp->cbAlign = 1;
    }
    if (pProp->cBuffers == 0) {
        pProp->cBuffers = 3;
    }

    // Release the format block.
    FreeMediaType(mt);

    // Set allocator properties.
    ALLOCATOR_PROPERTIES Actual;
    hr = pAlloc->SetProperties(pProp, &Actual);
    if (FAILED(hr)) {
        return hr;
    }

    // Even when it succeeds, check the actual result.
    if (pProp->cbBuffer > Actual.cbBuffer) {
        return E_FAIL;
    }
    return S_OK;
}
There is no specific policy on the number of buffers, though you should definitely be aware that a fixed number of buffers is the mechanism that throttles the flow of samples. When all buffers are in use, a request for another buffer will block execution until one becomes available again.
That is, if your code holds buffer references for some purpose, you should allocate a corresponding number of buffers so that you don't deadlock yourself. E.g. if you hold the last media sample reference internally (say, to be able to re-send it) and you still want to be able to deliver other media samples, you need at least two buffers on the allocator.
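For example, a hedged tweak inside DecideBufferSize for that case (the held count here is purely illustrative):

// If this filter keeps a reference to the last delivered sample (e.g. to
// re-send it), request one extra buffer so delivery never starves while
// that reference is held.
const long samplesHeldInternally = 1;   // illustrative; depends on your filter
pProp->cBuffers = max(pProp->cBuffers, 1 + samplesHeldInternally);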
The output pin is typically responsible for choosing and setting up the allocator, and the input pin might need to check and update the properties if/when it is notified which allocator is to be used. On in-place transformation filters, where allocators are shared, you might want an additional check to make sure the requirements are met.
The DMO Wrapper Filter uses (at least sometimes) allocators with only one buffer and is still in good standing.
With audio you normally have more buffers because you queue data for playback.
If you have a reference leak in your code and you don't release media sample pointers, your streaming might lock up dead because of this.

How do you retrieve stylus pressure information on windows?

Is anyone aware of a sane way to get tablet/stylus pressure information on Windows?
It's possible to distinguish the stylus from the mouse with ::GetMessageExtraInfo, but you can't get any more information beyond that. I also found the WinTab API in an out-of-the-way corner of the Wacom site, but that's not part of Windows as far as I can tell, and it has a completely distinct event/messaging system from the message queue.
Given that all I want is the most basic pressure information, surely there is a standard Win32/COM API; is anyone aware of what it might be?
The current way to do this is to handle WM_POINTERnnn msgs.
Note this is for Win 8 and later.
Note you will get these msgs for touch AND pen, so you'll need to know the pointerType in order to test for pen. The WPARAM received by a WNDPROC for WM_POINTERnnnn msgs such as WM_POINTERUPDATE contains the pointer id, which you will need in order to request more info. Empirically I found that WM_POINTERUPDATE results in info that contains pressure data, whereas if the pointer flags indicate down/up there is no pressure info.
const WORD wid = GET_POINTERID_WPARAM(wParam);
POINTER_INFO piTemp = {};
GetPointerInfo(wid, &piTemp);
if (piTemp.pointerType == PT_PEN)
{
    UINT32 entries = 0;
    UINT32 pointers = 0;
    GetPointerFramePenInfoHistory(wid, &entries, &pointers, NULL); // query the required counts
    // TODO: allocate the space needed for the info, process the data in a loop to
    // retrieve it, and test pointerInfo.pointerFlags for down/up/update.
}
Once you know you are dealing with pen, you can get the pressure info from the POINTER_PEN_INFO struct.
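If you only need the latest value rather than the whole history, a minimal sketch using GetPointerPenInfo (reusing the wid from above; field names are those of POINTER_PEN_INFO) might look like this:

// Minimal sketch: read the current pressure for a pen pointer (Windows 8+).
POINTER_PEN_INFO penInfo = {};
if (GetPointerPenInfo(wid, &penInfo))
{
    // Pressure is reported in the range 0..1024 when the pen supports it.
    if (penInfo.penMask & PEN_MASK_PRESSURE)
    {
        UINT32 pressure = penInfo.pressure;
        // ... use pressure, penInfo.tiltX, penInfo.tiltY as needed ...
    }
}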
This is similar to handling touch although for touch you'd want gesture recognition and inertia. There is a Microsoft sample illustrating using these functions.
It's part of a Build talk:
https://channel9.msdn.com/Events/Build/2013/4-022
You need to use the Tablet PC Pen/Ink API. The COM version of the API lives in InkObj.dll. Here is a starting point for documentation: http://msdn.microsoft.com/en-us/library/ms700664.aspx
If I remember correctly, InkObj.dll is available on Windows XP SP2 and all later Windows client OSes, regardless of whether the machine is a Tablet PC.
UPDATE:
It's been a number of years since I initially provided this answer, but WinTab has become the de facto standard, and N-trig more or less folded, eventually building a wrapper to allow the WinTab API to be accessed via their digitizer.
(http://www.tabletpcbuzz.com/showthread.php?37547-N-trig-Posts-WinTAB-Support-Driver)
This is a pretty late response, but recently my wife and I purchased a Dell XT tablet PC, which as it turns out actually uses NTrig, a suite of interfaces that utilizes Ink, the accepted new Windows API that shipped with Windows XP Tablet PC Edition, then SP2 and all versions thereafter.
A lot of Wacom tablets and others use the WinTab API, which is not currently open nor really permitted for general use. From what I hear, the folks who maintain it are pretty sue-happy.
So it depends on what type of tablet you're using and the drivers you have installed for it. In my biased opinion, you should work with Ink, as it provides (or at least, through NTrig and Windows 7, WILL provide) multi-touch capability and will likely be the new standard for tablet interfaces. But as of now, NTrig devices do not translate their pressure and angle information to common WinTab-based applications such as Photoshop or Corel Painter. Those applications tend to require at least some support for Microsoft's Tablet API in order to function properly.
If using UWP Windows Runtime then it's quite straightforward. The PointerEventArgs event seems to have all necessary data.
Modified Core App (C++/WinRT) template project snippet from Visual Studio 2019:
void OnPointerMoved(IInspectable const &, PointerEventArgs const &args)
{
    if (m_selected)
    {
        float2 const point = args.CurrentPoint().Position();
        m_selected.Offset(
        {
            point.x + m_offset.x,
            point.y + m_offset.y,
            0.0f
        });

        // (new!) Change sprite color based on pen pressure and tilt
        auto sprite = m_selected.as<SpriteVisual>();
        auto const props = args.CurrentPoint().Properties();
        auto const pressure = props.Pressure();
        auto const orientation = props.Orientation() / 360.0f;
        auto const tiltx = (props.XTilt() + 90) / 180.0f;
        auto const tilty = (props.YTilt() + 90) / 180.0f;

        Compositor compositor = m_visuals.Compositor();
        sprite.Brush(compositor.CreateColorBrush({
            (uint8_t)(pressure * 0xFF),
            (uint8_t)(tiltx * 0xFF),
            (uint8_t)(tilty * 0xFF),
            (uint8_t)(orientation * 0xFF)
        }));
    }
}
Similar code will likely work in C#, JavaScript, etc.