I want to implement dataset recording (color frames, depth frames) for a Tango device.
There is a boolean configuration flag, config_enable_dataset_recording, in the Tango C API.
So the question is: what does config_enable_dataset_recording mean?
(Hopefully this flag will be helpful for my task.)
This feature is currently disabled on the API side. Originally, the idea of dataset recording was to log data to a specific disk drive.
I am trying to programmatically get the maximum display refresh rate that Windows allows (i.e., Display settings > Advanced display settings > Refresh rate > max value). Chances are there's no such query and I instead need to obtain all the possible options. How do I do that?
I've already obtained the monitor names and current refresh rates using the CCD API, by obtaining DISPLAYCONFIG_PATH_INFOs and by using DisplayConfigGetDeviceInfo. But I can't seem to find a way to obtain the refresh rate options associated with a monitor. A CCD API based solution would be perfect, but an alternative is fine - it just means I'll have to reconcile the information obtained via the CCD API with that obtained from that other API, somehow.
Also, I'm trying to do this in the context of a plain Windows executable that doesn't use a specific graphics backend library (e.g., DX12) or game-making framework.
Thanks!
Using the CCD API, you can call DisplayConfigGetDeviceInfo with DISPLAYCONFIG_DEVICE_INFO_TYPE::DISPLAYCONFIG_DEVICE_INFO_GET_SOURCE_NAME to get the GDI device name, usually something like \\.\DISPLAY1, \\.\DISPLAY2, etc.
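As a minimal sketch, it looks roughly like this; it assumes you already have a DISPLAYCONFIG_PATH_INFO from QueryDisplayConfig (as described in the question), and the helper name is just illustrative:

#include <windows.h>
#include <string>

// Illustrative helper: given a path from QueryDisplayConfig, return the GDI
// device name of its source, e.g. L"\\.\DISPLAY1".
std::wstring GetGdiDeviceName(const DISPLAYCONFIG_PATH_INFO& path)
{
    DISPLAYCONFIG_SOURCE_DEVICE_NAME sourceName = {};
    sourceName.header.type = DISPLAYCONFIG_DEVICE_INFO_GET_SOURCE_NAME;
    sourceName.header.size = sizeof(sourceName);
    sourceName.header.adapterId = path.sourceInfo.adapterId;
    sourceName.header.id = path.sourceInfo.id;

    if (DisplayConfigGetDeviceInfo(&sourceName.header) == ERROR_SUCCESS)
        return sourceName.viewGdiDeviceName;
    return L"";
}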
Once you have that device name, you can use the EnumDisplaySettingsW function to enumerate all DEVMODE structures for this device. This will give you every combination of modes (resolution, frequency, etc.) that the device supports (which can easily amount to hundreds of modes).
Once you have those, you just need to group them by the DEVMODE dmDisplayFrequency field (and sort the result), as in the sketch below.
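Here is a rough, self-contained sketch of that enumeration; the device name literal is a placeholder for the name you obtained above:

#include <windows.h>
#include <set>
#include <cstdio>

int main()
{
    // Replace with the GDI device name obtained via DisplayConfigGetDeviceInfo.
    const wchar_t* deviceName = L"\\\\.\\DISPLAY1";

    std::set<DWORD> frequencies; // std::set keeps the values unique and sorted

    DEVMODEW mode = {};
    mode.dmSize = sizeof(mode);

    // Walk every mode (resolution/frequency/bit depth combination) the device reports.
    for (DWORD i = 0; EnumDisplaySettingsW(deviceName, i, &mode); ++i)
        frequencies.insert(mode.dmDisplayFrequency);

    for (DWORD hz : frequencies)
        wprintf(L"%lu Hz\n", hz);

    // The maximum supported refresh rate is simply the largest collected value.
    if (!frequencies.empty())
        wprintf(L"Max: %lu Hz\n", *frequencies.rbegin());
    return 0;
}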
I am trying to write a small DirectShow application using C++. My main issue is grabbing the raw data from the microphone and saving it as BYTES.
Is this possible using a DirectShow filter? What interface am I supposed to use in order to get the data?
Right now, I am able to achieve writing a recorded AVI file using the following Graph Filter:
Microphone->Avi Mux->File writer
This graph works fine.
I have tried to use the Sample Grabber (which has been deprecated by Microsoft), but I lack the knowledge of what to do with this BaseFilter type.
By design, a DirectShow topology needs to be complete, starting with a source (the microphone) and terminating with a renderer filter, and data exchange in a DirectShow pipeline is private to the connected filters, with no data exposed to the controlling application.
This is confusing because you apparently want to export content from the pipeline to the outside world, which is not exactly the way DirectShow is designed to work.
The "intended", "DirectShow way" is to develop a custom renderer filter which would connect to the microphone filter and receive its data. More often than not developers prefer to not take this path since developing a custom filter is a sort of complicated.
The popular solution is to build a pipeline Microphone --> Sample Grabber --> Null Renderer. Sample Grabber is a filter which exposes data, which is passed through, using SampleCB callback. Even though it's getting harder with time, you can still find tons of code which do the job. Most developers prefer this path: to build pipeline using ready to use blocks and forget about DirectShow API.
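As an illustration (not a complete program), the callback side looks roughly like this; it assumes the ISampleGrabberCB declaration is available from qedit.h (older SDKs) or copied from the documentation, and graph construction and error handling are omitted:

#include <dshow.h>
#include <vector>
// ISampleGrabber / ISampleGrabberCB come from qedit.h; with newer SDKs you
// typically copy their declarations from the documentation.

class AudioGrabberCB : public ISampleGrabberCB
{
    std::vector<BYTE> m_buffer; // raw PCM bytes accumulate here
public:
    // Called by the Sample Grabber for every media sample passing through.
    STDMETHODIMP SampleCB(double /*sampleTime*/, IMediaSample* pSample)
    {
        BYTE* pData = nullptr;
        if (SUCCEEDED(pSample->GetPointer(&pData)))
        {
            long size = pSample->GetActualDataLength();
            m_buffer.insert(m_buffer.end(), pData, pData + size);
        }
        return S_OK;
    }

    // Alternative callback; unused when SampleCB is selected.
    STDMETHODIMP BufferCB(double, BYTE*, long) { return E_NOTIMPL; }

    // Minimal IUnknown for an object owned and kept alive by the application.
    STDMETHODIMP QueryInterface(REFIID riid, void** ppv)
    {
        if (riid == IID_IUnknown || riid == __uuidof(ISampleGrabberCB))
        {
            *ppv = static_cast<ISampleGrabberCB*>(this);
            return S_OK;
        }
        *ppv = nullptr;
        return E_NOINTERFACE;
    }
    STDMETHODIMP_(ULONG) AddRef() { return 2; }
    STDMETHODIMP_(ULONG) Release() { return 1; }
};

// Wiring sketch: build the graph (microphone capture filter -> Sample Grabber
// -> Null Renderer), then register the callback before running the graph:
//   pSampleGrabber->SetCallback(&grabberCB, 0); // 0 selects SampleCB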
And then another option would be to not use DirectShow at all. Given its current state, this API is an unlucky choice; you should rather be looking at WASAPI capture instead.
It seems that the current OpenTelemetry specification only allows making a sampling decision based on a span's initial attributes.
This is a shame because I'd like to always include certain high-signal spans, e.g. the ones with errors or long durations. These fields are typically only populated right before a span ends, i.e. too late for a sampling decision under the current spec.
Is there some other approach to get what I want? Or is it reasonable to open an issue in the repo to discuss allowing this use case?
Some context for my situation:
I'm working on a fairly small project with no dedicated resources for telemetry infrastructure. Instead, we're exporting spans directly from our node.js app server to Honeycomb and would like to get a more complete picture of errors and long-duration requests while sampling out low-signal spans to keep our cost under control.
There are a few ways you could achieve this.
One is implementing your own SpanProcessor which filters out these spans. This can get hairy quickly, since it breaks the trace: some spans might have their parentId set to a span which was dropped.
Another way is tail sampling, i.e. keeping or dropping the entire trace based on whether it matches certain criteria; there is a processor for that in opentelemetry-collector-contrib: https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/processor/tailsamplingprocessor. Please note that the agent/gateway deployment of the collector doing tail sampling has to have access to the full trace, and there is also some buffering involved.
I think Honeycomb also has a component which can be used for sampling the telemetry, but I have never used it: https://github.com/honeycombio/refinery.
I have created a Custom Image Recognition collection on IBM Cloud and am using it in my Django website to do the processing. However, I noticed that the response time ranges from 6 to 14 seconds.
I want to reduce this turnaround time. I am already zipping the image file that I send. So, when going through the API reference document here on IBM Cloud, I noticed that there is a method called "get_model_file" which downloads the collection file to local storage.
But there is no documentation on how this can be used. Has anyone successfully implemented this? Or am I missing something here?
However, I noticed that the response time ranges from 6 to 14 seconds.
I want to reduce this turnaround time. I am already zipping the image file that I sent.
How many images at a time are you sending in the zip file to the /analyze endpoint? If you are just sending one image at a time, you should not bother zipping it. Also, if you can, you should parallelize your code so that you make 1 request per image, rather than sending, say, 6 images in a single zip file. This will reduce the latency.
By the way, when using the v4 API you should resize your images to no more than 300 pixels in either width or height. In fact, you can "squash" the aspect ratio to square and it will not affect the outcome. The service will do this resizing internally anyhow, but if you do it on the client side, you save network transmission and decoding time.
With a single image at a time, if your resolution is under 300x300 pixels, you should have latency under 1.5 seconds on a typical call, including your network transmission time.
As the documentation states:
Currently, the model format is specific to Android apps.
So unless you are creating an Android app, this is not going to work for you.
You probably have two areas of latency. The first will be from the browser to your Django app. The second will be from your Django app to the Visual Recognition service. I am not sure where you have hosted the Django app, but if you locate it in the same region as the service (the same data centre would be even better), you might be able to reduce part of that latency.
I have used this:
var watcher = new GeoCoordinateWatcher(GeoPositionAccuracy.Default);
Where High means GPS and Default is the result from any of these: GPS, WiFi.
I would like to know:
1) Is Default referring to cell tower ID or WiFi? Is there any difference?
2) When I run my app with this watcher inside a building (not close to any entrance or window), I cannot get any result or signal at all from the watcher.
If Default refers to WiFi, it should still be able to get the cell tower ID.
If I am closer to the window or entrance of the building, I can get a result from the watcher.
Can anyone help me with this?
Thanks
The GeoPositionAccuracy value does not directly determine whether GPS is used or not.
The GeoCoordinateWatcher provides an abstraction layer over the way the location is resolved internally. You will never know what was used to determine the location. Because the watcher does not indicate how the location was determined, it does not provide access to information tied to a specific technology which may or may not have been used in identifying the device's location.
In theory, depending on the location of the device, it may be possible to get a "High" accuracy reading based purely on public WiFi data.
You cannot get the cell tower ID from any of the available APIs.