I am receiving GPS data from an M210v2 drone using the OSDK 4.0.1, and I have several questions about the GPSDetail struct:
typedef struct GPSDetail
{
  float32_t hdop;       /*!< horizontal dilution of precision */
  float32_t pdop;       /*!< position dilution of precision */
  float32_t fix;        /*!< the state of GPS fix */
  float32_t gnssStatus; /*!< vertical position accuracy (mm) */
  float32_t hacc;       /*!< horizontal position accuracy (mm) */
  float32_t sacc;       /*!< the speed accuracy (cm/s) */
  uint32_t  usedGPS;    /*!< the number of GPS satellites used for pos fix */
  uint32_t  usedGLN;    /*!< the number of GLONASS satellites used for pos fix */
  uint16_t  NSV;        /*!< the total number of satellites used for pos fix */
  uint16_t  GPScounter; /*!< the accumulated times of sending GPS data */
} GPSDetail; // pack(1)
The only information that I could find on the values of the "fix" field is this two-year-old post:
0x00:no fix
0x01:dead reckoning only
0x02:2D-fix
0x03:3D-fix
0x04:GPS+dead reckoning combined
0x05:Time only fix
Is this still accurate? If so, what does a "time only fix" mean, and can I use the latitude/longitude estimates in that case?
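For context, this is how I currently decode that field, assuming the list above still holds (illustrative only; note that fix is oddly typed as float32_t, so I cast it before switching):

/* Decode GPSDetail::fix per the list above (assumes the mapping is current). */
const char *fixToString(float fix)
{
    switch ((int)fix) {
        case 0:  return "no fix";
        case 1:  return "dead reckoning only";
        case 2:  return "2D fix";
        case 3:  return "3D fix";
        case 4:  return "GPS + dead reckoning combined";
        case 5:  return "time only fix";
        default: return "unknown";
    }
}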
Precisely what kind of accuracy measures are given in gnssStatus, hacc and sacc? Are these RMS, 2DRMS, CEP, R95, SEP, or some other statistic?
What are the units of HDOP and PDOP?
Is NSV always the sum of usedGPS and usedGLN?
I am experiencing artifacts on the right edge of scaled and converted images when converting into planar YUV pixel formats with sws_scale. I am reasonably sure (although I cannot find it anywhere in the documentation) that this is because swscale uses an optimization for 32-byte-aligned lines in the destination. However, I would like to turn this off, because I am using sws_scale for image composition, so even though the destination lines may be 32-byte aligned, the output image may not be.
Example.
The full output frame is 1280x720 yuv422p10le (this is 32-byte aligned).
However, into the top-left corner I am scaling an image with an output width of 1280 / 3 = 426.
426 in this format is not 32-byte aligned, but I believe swscale sees that the output linesize is 32-byte aligned and overwrites past the width of 426, putting garbage in the next 22 bytes of data, thinking this is simply padding, when in my case this is displayable area.
This is why I need to actually disable this optimization, or somehow trick swscale into believing it does not apply, while keeping the rest of the program intact, which otherwise works fine.
I have tried adding extra padding to the destination lines so they are no longer 32-byte aligned, but this did not help as far as I can tell.
Edit with code example (rendering omitted for ease of use).
Also, here is a similar issue; unfortunately, as I stated there, that fix will not work for my use case. https://github.com/obsproject/obs-studio/pull/2836
Use the commented line of code to swap between an output width which is and isn't 32-byte aligned.
#include "libswscale/swscale.h"
#include "libavutil/imgutils.h"
#include "libavutil/pixelutils.h"
#include "libavutil/pixfmt.h"
#include "libavutil/pixdesc.h"
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
int main(int argc, char **argv) {
    /// Set up a 1280x720 window, and an item with 1/3 width and height of the window.
    int window_width, window_height, item_width, item_height;
    window_width = 1280;
    window_height = 720;
    item_width = (window_width / 3);
    item_height = (window_height / 3);

    int item_out_width = item_width;
    /// This line sets the item width to be 32-byte aligned; uncomment to see uncorrupted results.
    /// Note % 16 because outformat is 2 bytes per component.
    //item_out_width -= (item_width % 16);

    enum AVPixelFormat outformat = AV_PIX_FMT_YUV422P10LE;
    enum AVPixelFormat informat = AV_PIX_FMT_UYVY422;

    int window_lines[4] = {0};
    av_image_fill_linesizes(window_lines, outformat, window_width);

    uint8_t *window_planes[4] = {0};
    window_planes[0] = calloc(1, window_lines[0] * window_height);
    window_planes[1] = calloc(1, window_lines[1] * window_height);
    window_planes[2] = calloc(1, window_lines[2] * window_height); /// Fill the window with all 0s; this is green in YUV.

    int item_lines[4] = {0};
    av_image_fill_linesizes(item_lines, informat, item_width);

    uint8_t *item_planes[4] = {0};
    item_planes[0] = malloc(item_lines[0] * item_height);
    memset(item_planes[0], 100, item_lines[0] * item_height);

    struct SwsContext *ctx;
    ctx = sws_getContext(item_width, item_height, informat,
                         item_out_width, item_height, outformat, SWS_FAST_BILINEAR, NULL, NULL, NULL);

    /// Check a block in the normal region.
    printf("Pre scale normal region %d %d %d\n",
           (int)((uint16_t*)window_planes[0])[0], (int)((uint16_t*)window_planes[1])[0],
           (int)((uint16_t*)window_planes[2])[0]);

    /// Check a block in the corrupted region (should be all zeros). These values should be out of the converted region.
    int corrupt_offset_y = (item_out_width + 3) * 2;  /// (item_out_width + 3) * 2 bytes per component, Y plane
    int corrupt_offset_uv = (item_out_width + 3);     /// (item_out_width + 3) * (2 bytes per component >> 1 for horizontal subsampling), U and V planes
    printf("Pre scale corrupted region %d %d %d\n",
           (int)(*((uint16_t*)(window_planes[0] + corrupt_offset_y))),
           (int)(*((uint16_t*)(window_planes[1] + corrupt_offset_uv))),
           (int)(*((uint16_t*)(window_planes[2] + corrupt_offset_uv))));

    sws_scale(ctx, (const uint8_t**)item_planes, item_lines, 0, item_height, window_planes, window_lines);

    /// Perform the same tests after scaling.
    printf("Post scale normal region %d %d %d\n",
           (int)((uint16_t*)window_planes[0])[0], (int)((uint16_t*)window_planes[1])[0],
           (int)((uint16_t*)window_planes[2])[0]);
    printf("Post scale corrupted region %d %d %d\n",
           (int)(*((uint16_t*)(window_planes[0] + corrupt_offset_y))),
           (int)(*((uint16_t*)(window_planes[1] + corrupt_offset_uv))),
           (int)(*((uint16_t*)(window_planes[2] + corrupt_offset_uv))));

    return 0;
}
Example Output:
//No alignment
Pre scale normal region 0 0 0
Pre scale corrupted region 0 0 0
Post scale normal region 400 400 400
Post scale corrupted region 512 36865 36865
//With alignment
Pre scale normal region 0 0 0
Pre scale corrupted region 0 0 0
Post scale normal region 400 400 400
Post scale corrupted region 0 0 0
426 in this format is not 32-byte aligned, but I believe swscale sees that the output linesize is 32-byte aligned and overwrites past the width of 426, putting garbage in the next 22 bytes of data, thinking this is simply padding, when in my case this is displayable area.
That's actually correct; swscale indeed does that. Good analysis. There are two ways to get rid of this:
disable all SIMD code using av_set_cpu_flags_mask(0).
write the re-scaled 426xN image in a temporary buffer and then manually copy the pixels into the unpadded destination plane.
The reason ffmpeg/swscale overwrites the destination is performance. If you don't care about runtime and want the simplest code, use the first solution. If you do want performance and don't mind slightly more complicated code, use the second solution.
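A sketch of the second solution, reusing the variable names from your example (untested; the temporary image is allocated with align = 32 so swscale can write its padded tail harmlessly):

/* Option 2 sketch: scale into a padded temporary image, then copy only
 * the displayable bytes into the composition target. */
uint8_t *tmp_planes[4];
int tmp_lines[4];
if (av_image_alloc(tmp_planes, tmp_lines, item_out_width, item_height, outformat, 32) < 0)
    return -1;
sws_scale(ctx, (const uint8_t**)item_planes, item_lines, 0, item_height,
          tmp_planes, tmp_lines);
/* yuv422p10le: 2 bytes per sample, chroma planes are half width */
for (int y = 0; y < item_height; y++) {
    memcpy(window_planes[0] + y * window_lines[0],
           tmp_planes[0] + y * tmp_lines[0], item_out_width * 2);
    memcpy(window_planes[1] + y * window_lines[1],
           tmp_planes[1] + y * tmp_lines[1], (item_out_width / 2) * 2);
    memcpy(window_planes[2] + y * window_lines[2],
           tmp_planes[2] + y * tmp_lines[2], (item_out_width / 2) * 2);
}
av_freep(&tmp_planes[0]);

For the first solution, a single av_set_cpu_flags_mask(0) call (from libavutil/cpu.h) before creating the SwsContext is enough, at the cost of all SIMD speedups.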
I am having a problem sending a float through the UART to be plotted on a graph in Microchip's Data Visualizer.
I can plot int values without problems, but floats are driving me crazy.
I made a sine wave with the Laplace transform, mapped it onto the 'z' plane with the bilinear z-transform, and then put the resulting equation in the main routine of a dsPIC33FJ128GP802. It is working OK. In the terminal I can see the values, and if I copy/paste those values into Gnumeric and make a graph, it shows me my discrete sine wave.
The problem comes when I try to plot the float number 'yn' in the Data Visualizer of MPLAB X. There is something I am missing in the middle.
I am using MPLAB X v5.45 and XC16 v1.61 on Debian Bullseye. The communication with the microcontroller is transparent at 9600-8-N-1.
Here is my main code:
int main(void)
{
    InitClock();  // This is the PLL settings
    Init_UART1(); // This is the UART init values for 9600-8-N-1
    float states[6] = {0,0,0,0,0,0};
    // states [xn-2 xn-1 xn yn yn-1 yn-2]
    xn = 1.0; // the initial value
    while (1)
    {
        yn = 1.9842*yn1 - yn2 + 0.0013*xn1 + 0.0013*xn2; // equation for the sine wave
        yn2 = yn1;
        yn1 = yn;
        xn2 = xn1;
        xn1 = xn;
        putc(0x03, stdout);
        // Here I want to send the yn to plot in the Data Visualizer
        putc(0xFC, stdout);
    }
}
The variables in the equation
yn = 1.9842*yn1-yn2+0.0013*xn1+0.0013*xn2;
are defined with #define like this:
#define xn states[2]
#define xn1 states[1]
#define xn2 states[0]
#define yn states[3]
#define yn1 states[4]
#define yn2 states[5]
The WriteUART1(0x03); and WriteUART1(0xFC); (the putc calls above) are there so the Data Visualizer can find the first byte and the last byte of the frame, like in the example from the Microchip video.
The question is: how can I get the float yn plotted by the Microchip Data Visualizer?
Thanks in advance.
Ok, here is the answer.
A float is 32 bits long, but you can't handle it the way you would an int. The way around this is to access it through a char pointer.
You have to make a pointer to char and assign the address of the float to that pointer (casting the address, because a char pointer isn't the same as a float pointer). Then just send the 4 bytes, incrementing the char pointer.
Here is the code:
char *ptr; // pointer used to walk the bytes of yn
int x;
while (1)
{
    yn = 1.9842 * yn1 - yn2 + 0.0013 * xn1 + 0.0013 * xn2; // sine recursive equation
    yn2 = yn1;
    yn1 = yn;
    xn2 = xn1;
    xn1 = xn;
    ptr = (char *)&yn;               // ptr holds the address of yn, cast to char*, because &yn is float*
    putc(0x03, stdout);              // the start frame for MPLAB X Data Visualizer
    for (x = 0; x < sizeof(yn); x++) // loop over the four bytes of the float
        putc(*ptr++, stdout);        // send every byte to the UART
    putc(0xFC, stdout);              // the end frame for MPLAB X Data Visualizer
}
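An equivalent way to get at the bytes, assuming <string.h> is available, is to memcpy the float into a small byte array; a sketch:

unsigned char bytes[sizeof(float)];
int i;
memcpy(bytes, &yn, sizeof(bytes)); // copy the float's 4 raw bytes
putc(0x03, stdout);                // start frame
for (i = 0; i < sizeof(bytes); i++)
    putc(bytes[i], stdout);        // payload: raw little-endian float32
putc(0xFC, stdout);                // end frame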
With this working, you have to configure the Data Visualizer: set your baud rate, then select New Streaming Variable. For Framing Mode select One's Complement, with the start frame 0x03 and the end frame 0xFC in this case. Name the variable, give it type float32, press Next, then Plot Variable, then Finish, and you have the variable in the MPLAB X time plotter.
Hope this helps someone.
Regards.
This is just a confirmation question, but I really want to make sure that the values I receive from calling the function XQueryPointer are in pixels for the X and Y screen coordinates.
extern Bool XQueryPointer(
    Display*      /* display */,
    Window        /* w */,
    Window*       /* root_return */,
    Window*       /* child_return */,
    int*          /* root_x_return */,
    int*          /* root_y_return */,
    int*          /* win_x_return */,
    int*          /* win_y_return */,
    unsigned int* /* mask_return */
);
This is because I will need to perform some operations once I get the whole resolution of my screen using the following functions; i.e., I'll filter out some pixels from the entire screen, but I need to know that the values returned by XQueryPointer are also in pixels.
xVal = DisplayWidth(display, screen_number);
yVal = DisplayHeight(display, screen_number);
I'm assuming root_x_return and root_y_return are in pixels. Am I correct?
Yes. From the Xlib documentation:
Each window and pixmap has its own coordinate system. The coordinate system has the X axis horizontal and the Y axis vertical with the origin [0, 0] at the upper-left corner. Coordinates are integral, in terms of pixels, and coincide with pixel centers. For a window, the origin is inside the border at the inside, upper-left corner.
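As a quick sanity check, a minimal sketch like the following (assuming a running X server; link with -lX11) prints the pointer position and the display size in the same pixel units:

#include <X11/Xlib.h>
#include <stdio.h>

int main(void)
{
    Display *display = XOpenDisplay(NULL);
    if (!display)
        return 1;
    int screen = DefaultScreen(display);

    Window root_return, child_return;
    int root_x, root_y, win_x, win_y;
    unsigned int mask;
    /* root_x/root_y come back in pixels, the same units as DisplayWidth/Height */
    if (XQueryPointer(display, RootWindow(display, screen), &root_return,
                      &child_return, &root_x, &root_y, &win_x, &win_y, &mask)) {
        printf("pointer at (%d, %d) on a %dx%d screen\n", root_x, root_y,
               DisplayWidth(display, screen), DisplayHeight(display, screen));
    }
    XCloseDisplay(display);
    return 0;
}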
Scenario:
I'm experimenting with a thermocouple amplifier (SN-6675) and an Arduino Due.
After including the MAX6675 library, the Arduino can measure room temperature.
However, the temperature measured by the Arduino has two issues:
1) an offset compared to the Fluke thermometer;
2) tons of noise, and it keeps fluctuating even after taking the average of every 5 temperature samples.
E.g., the Fluke thermometer reads 28.9 °C at room temperature, while the Arduino reads 19.75~45.75 °C.
Question: Any method/filter to reduce the measured noise and give a steady output?
Code is attached for reference.
#include <MAX6675.h>

// Thermocouple amplifier pins
int CS = 7;        // CS pin on MAX6675
int SO = 8;        // SO pin of MAX6675
int SCKpin = 6;    // SCK pin of MAX6675
int units = 1;     // Units to read out temp (0 = ˚F, 1 = ˚C)
float error = 0.0; // Temperature compensation error
float tmp = 0.0;   // Temperature output variable

// Sample counter
int no = 0;

MAX6675 temp0(CS, SO, SCKpin, units, error); // Initialize the MAX6675 library for our chip

void setup() {
    Serial.begin(9600); // initialize serial communications at 9600 bps
}

void loop() {
    no = no + 1;
    tmp = temp0.read_temp(5); // Read the temp 5 times and return the average value to the var
    Serial.print(tmp);
    Serial.print("\t");
    Serial.println(no);
    delay(1000);
}
Any method/filter to reduce the measured noise and give a steady output?
The Kalman filter is pretty much the standard method for this:
Kalman filtering, also known as linear quadratic estimation (LQE), is an algorithm that uses a series of measurements observed over time, containing noise (random variations) and other inaccuracies, and produces estimates of unknown variables that tend to be more precise than those based on a single measurement alone.
If your background isn't maths, don't be put off by the formulas that you come across. In the single-variable case like yours, the filter is remarkably easy to implement, and I am sure googling will find a few implementations.
The filter will give you an estimate of the temperature as well as an estimate of the variance of the temperature (the latter gives you an idea of how confident the filter is about its current estimate).
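In the single-variable case the whole filter fits in a few lines; here is an illustrative sketch, where q, r, and the initial state are assumptions you would tune against your sensor:

/* Minimal scalar Kalman filter for a near-constant temperature. */
float kalmanUpdate(float measurement) {
    static float x = 25.0;   // state estimate in deg C (initial guess)
    static float p = 100.0;  // estimate variance (start uncertain)
    const float q = 0.01;    // process noise variance (assumed)
    const float r = 25.0;    // measurement noise variance (assumed)

    p = p + q;                     // predict: uncertainty grows between samples
    float k = p / (p + r);         // Kalman gain
    x = x + k * (measurement - x); // correct towards the measurement
    p = (1.0 - k) * p;             // shrink uncertainty after the update
    return x;
}

You would then call it as tmp = kalmanUpdate(temp0.read_temp(5)); in loop().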
You may want to go for a simpler averaging algorithm instead. This is not as elegant as a low-pass algorithm, but may be adequate for your case. These algorithms are plentiful on the web.
You can monkey around with the number of samples you take to balance the compromise between latency and stability. You may want to start with 10 samples and work from there.
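For illustration, a minimal moving-average sketch along those lines (N = 10 as a starting point; the names are my own):

/* Simple moving average over the last N_SAMPLES readings. */
#define N_SAMPLES 10
float samples[N_SAMPLES];
int sampleIdx = 0;
int sampleCount = 0;

float movingAverage(float newSample) {
    samples[sampleIdx] = newSample;          // overwrite the oldest reading
    sampleIdx = (sampleIdx + 1) % N_SAMPLES;
    if (sampleCount < N_SAMPLES) sampleCount++;
    float sum = 0;
    for (int i = 0; i < sampleCount; i++)
        sum += samples[i];
    return sum / sampleCount;                // average of what we have so far
}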
I get the general idea that frame.data[] is interpreted depending on the pixel format of the video (RGB or YUV). But is there any general way to get all the pixel data from the frame? I just want to compute the hash of the frame data, without interpreting it to display the image.
According to AVFrame.h:
uint8_t* AVFrame::data[AV_NUM_DATA_POINTERS]
pointer to the picture/channel planes.
int AVFrame::linesize[AV_NUM_DATA_POINTERS]
For video, size in bytes of each picture line.
Does this mean that if I just extract from data[i] for linesize[i] bytes then I get the full pixel information about the frame?
linesize[i] contains the stride for the i-th plane.
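To make that concrete, a row-by-row walk of plane i looks roughly like this (plane_height stands for the plane's own height, which is smaller than frame->height for vertically subsampled chroma planes):

for (int y = 0; y < plane_height; y++) {
    const uint8_t *row = frame->data[i] + y * frame->linesize[i];
    // 'row' holds this line's packed samples; bytes beyond the visible
    // width, up to linesize[i], are alignment padding and may be garbage
}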
To obtain the whole buffer, use the function from avcodec.h
/**
 * Copy pixel data from an AVPicture into a buffer, always assume a
 * linesize alignment of 1.
 */
int avpicture_layout(const AVPicture* src, enum AVPixelFormat pix_fmt,
                     int width, int height,
                     unsigned char *dest, int dest_size);
Use
int avpicture_get_size(enum AVPixelFormat pix_fmt, int width, int height);
to calculate the required buffer size.
The avpicture_* API is deprecated. Now you can use av_image_copy_to_buffer() and av_image_get_buffer_size() to get the image buffer.
You can also avoid creating a new buffer as above (av_image_copy_to_buffer()) by reading AVFrame::data[] directly; the size of each plane can be obtained from av_image_fill_plane_sizes(). Only do this if you clearly understand the pixel format.
Find more here: https://www.ffmpeg.org/doxygen/trunk/group__lavu__picture.html
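Putting the non-deprecated calls together, a sketch of a hash-ready flattening helper might look like this (my own illustration, not from the FFmpeg docs; align = 1 strips per-line padding so the hash does not depend on stride):

#include <libavutil/frame.h>
#include <libavutil/imgutils.h>
#include <libavutil/error.h>
#include <libavutil/mem.h>

/* Flatten an AVFrame's pixel data into one contiguous, padding-free buffer.
 * On success the caller owns *out and frees it with av_free(). */
static int frame_to_buffer(const AVFrame *frame, uint8_t **out, int *out_size)
{
    int size = av_image_get_buffer_size(frame->format, frame->width,
                                        frame->height, 1);
    if (size < 0)
        return size;
    uint8_t *buf = av_malloc(size);
    if (!buf)
        return AVERROR(ENOMEM);
    int ret = av_image_copy_to_buffer(buf, size,
                                      (const uint8_t * const *)frame->data,
                                      frame->linesize, frame->format,
                                      frame->width, frame->height, 1);
    if (ret < 0) {
        av_free(buf);
        return ret;
    }
    *out = buf;
    *out_size = size;
    return 0;
}

Any hash function can then be run over the returned buffer.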