I am having a problem sending a float through the UART to be plotted on a graph in Microchip's Data Visualizer.
I could plot int values without problems, but float ones are driving me crazy.
I made a sine wave with the Laplace transform, mapped it onto the 'z' plane with the bilinear z-transform, and then put the resulting difference equation in the main routine of a dsPIC33FJ128GP802. It is working OK: in the terminal I can see the values, and if I copy/paste those values into Gnumeric and make a graph, it shows my discrete sine wave.
The problem comes when I try to plot the float number 'yn' in the Data Visualizer of MPLAB X. There is something I am missing in between.
I am using MPLAB X v5.45 and XC16 v1.61 on Debian Bullseye. The communication with the microcontroller is transparent at 9600-8-N-1.
Here is my main code:
int main(void)
{
    InitClock();  // This is the PLL settings
    Init_UART1(); // This is the UART init values for 9600-8-N-1

    float states[6] = {0,0,0,0,0,0};
    // states = [xn-2, xn-1, xn, yn, yn-1, yn-2]
    xn = 1.0; // the initial value

    while (1)
    {
        yn = 1.9842*yn1 - yn2 + 0.0013*xn1 + 0.0013*xn2; // difference equation for the sine wave
        yn2 = yn1;
        yn1 = yn;
        xn2 = xn1;
        xn1 = xn;

        putc(0x03, stdout);
        // Here I want to send yn to plot in the Data Visualizer
        putc(0xFC, stdout);
    }
}
The variables in the equation
yn = 1.9842*yn1-yn2+0.0013*xn1+0.0013*xn2;
are defined with #define like this:
#define xn states[2]
#define xn1 states[1]
#define xn2 states[0]
#define yn states[3]
#define yn1 states[4]
#define yn2 states[5]
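For context (this is my reading of the design, which the post does not spell out): the recursion is a standard second-order digital resonator. Written generically,

y[n] = 2\cos(\omega_0 T)\, y[n-1] - y[n-2] + (\text{input terms})

puts the poles on the unit circle, so the output oscillates at \omega_0. With 2\cos(\omega_0 T) = 1.9842, \omega_0 T \approx 0.126 rad, i.e. roughly one sine period every 50 output samples.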
The WriteUART1(0x03); and WriteUART1(0xFC); calls are there so the Data Visualizer can see the first byte and the last byte of each frame, like in the example from the Microchip video.
The question is: how can I get the float yn plotted by the Microchip Data Visualizer?
Thanks in advance.
Ok, here is the answer.
A float is 32 bits long, but you can't handle it bit by bit the way you can an int. The way to do it is to handle it as chars (bytes).
You make a pointer to char, assign the address of the float to that pointer (casting the address, because a char pointer is not the same type as a float pointer), and then send the 4 bytes one by one, incrementing the char pointer.
Here is the code:
char *ptr;      // byte pointer used to walk over the float
unsigned int x;

while (1)
{
    yn = 1.9842 * yn1 - yn2 + 0.0013 * xn1 + 0.0013 * xn2; // recursive sine equation
    yn2 = yn1;
    yn1 = yn;
    xn2 = xn1;
    xn1 = xn;

    ptr = (char *) &yn;               // ptr holds the address of yn, cast to char* because &yn is a float*
    putc(0x03, stdout);               // the start-of-frame byte for the MPLAB X Data Visualizer
    for (x = 0; x < sizeof(yn); x++)  // loop over the four bytes of the float
        putc(*ptr++, stdout);         // send every byte to the UART
    putc(0xFC, stdout);               // the end-of-frame byte for the MPLAB X Data Visualizer
}
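A note on the design choice (this alternative is my own sketch, not part of the original answer): instead of walking a char pointer over yn you can memcpy the float into a small byte buffer and send that; the bytes on the wire are identical. It needs #include <string.h> for memcpy:

unsigned char buf[sizeof(float)];
unsigned int i;

memcpy(buf, &yn, sizeof(buf));   // copy the 4 raw bytes of the float
putc(0x03, stdout);              // start-of-frame byte
for (i = 0; i < sizeof(buf); i++)
    putc(buf[i], stdout);        // payload: the raw bytes of yn in the target's native byte order
putc(0xFC, stdout);              // end-of-frame byte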
With this working, you have to configure the Data Visualizer: set your baud rate and then select a new streaming variable. Give it a name, for Framing Mode select One's Complement, and set the start frame to 0x03 and the end frame to 0xFC in this case. Set the variable type to float32, press Next, Plot Variable, Finish, and you have the variable in the MPLAB X time plotter.
Here is the image of the plot
Hope this helps someone.
Regards.-
I am experiencing artifacts on the right edge of scaled and converted images when converting into planar YUV pixel formats with sw_scale. I am reasonably sure (although I cannot find it anywhere in the documentation) that this is because sw_scale is using an optimization for 32-byte-aligned lines in the destination. However, I would like to turn this off, because I am using sw_scale for image composition, so even though the destination lines may be 32-byte aligned, the output image may not be.
Example:
The full output frame is 1280x720 yuv422p10le (this is 32-byte aligned).
However, into the top-left corner I am scaling an image with an output width of 1280 / 3 = 426.
426 in this format is not 32-byte aligned, but I believe sw_scale sees that the output linesize is 32-byte aligned and writes past the width of 426, putting garbage in the next 22 bytes of data, thinking this is simply padding, when in my case it is displayable area.
This is why I need to actually disable this optimization, or somehow trick sw_scale into believing it does not apply, while keeping intact the way the program works, which is otherwise fine.
I have tried adding extra padding to the destination lines so they are no longer 32-byte aligned; this did not help as far as I can tell.
Edit with code example (rendering omitted for ease of use).
Also, here is a similar issue; unfortunately, as I stated, their fix will not work for my use case. https://github.com/obsproject/obs-studio/pull/2836
Use the commented-out line of code to swap between an output width which is and isn't 32-byte aligned.
#include "libswscale/swscale.h"
#include "libavutil/imgutils.h"
#include "libavutil/pixelutils.h"
#include "libavutil/pixfmt.h"
#include "libavutil/pixdesc.h"
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
int main(int argc, char **argv) {
/// Set up a 1280x720 window, and an item with 1/3 width and height of the window.
int window_width, window_height, item_width, item_height;
window_width = 1280;
window_height = 720;
item_width = (window_width / 3);
item_height = (window_height / 3);
int item_out_width = item_width;
/// This line sets the item width to be 32 byte aligned uncomment to see uncorrupted results
/// Note %16 because outformat is 2 bytes per component
//item_out_width -= (item_width % 16);
enum AVPixelFormat outformat = AV_PIX_FMT_YUV422P10LE;
enum AVPixelFormat informat = AV_PIX_FMT_UYVY422;
int window_lines[4] = {0};
av_image_fill_linesizes(window_lines, outformat, window_width);
uint8_t *window_planes[4] = {0};
window_planes[0] = calloc(1, window_lines[0] * window_height);
window_planes[1] = calloc(1, window_lines[1] * window_height);
window_planes[2] = calloc(1, window_lines[2] * window_height); /// Fill the window with all 0s, this is green in yuv.
int item_lines[4] = {0};
av_image_fill_linesizes(item_lines, informat, item_width);
uint8_t *item_planes[4] = {0};
item_planes[0] = malloc(item_lines[0] * item_height);
memset(item_planes[0], 100, item_lines[0] * item_height);
struct SwsContext *ctx;
ctx = sws_getContext(item_width, item_height, informat,
item_out_width, item_height, outformat, SWS_FAST_BILINEAR, NULL, NULL, NULL);
/// Check a block in the normal region
printf("Pre scale normal region %d %d %d\n", (int)((uint16_t*)window_planes[0])[0], (int)((uint16_t*)window_planes[1])[0],
(int)((uint16_t*)window_planes[2])[0]);
/// Check a block in the corrupted region (should be all zeros) These values should be out of the converted region
int corrupt_offset_y = (item_out_width + 3) * 2; ///(item_width + 3) * 2 bytes per component Y PLANE
int corrupt_offset_uv = (item_out_width + 3); ///(item_width + 3) * (2 bytes per component rshift 1 for horiz scaling) U and V PLANES
printf("Pre scale corrupted region %d %d %d\n", (int)(*((uint16_t*)(window_planes[0] + corrupt_offset_y))),
(int)(*((uint16_t*)(window_planes[1] + corrupt_offset_uv))), (int)(*((uint16_t*)(window_planes[2] + corrupt_offset_uv))));
sws_scale(ctx, (const uint8_t**)item_planes, item_lines, 0, item_height,window_planes, window_lines);
/// Preform same tests after scaling
printf("Post scale normal region %d %d %d\n", (int)((uint16_t*)window_planes[0])[0], (int)((uint16_t*)window_planes[1])[0],
(int)((uint16_t*)window_planes[2])[0]);
printf("Post scale corrupted region %d %d %d\n", (int)(*((uint16_t*)(window_planes[0] + corrupt_offset_y))),
(int)(*((uint16_t*)(window_planes[1] + corrupt_offset_uv))), (int)(*((uint16_t*)(window_planes[2] + corrupt_offset_uv))));
return 0;
}
Example Output:
//No alignment
Pre scale normal region 0 0 0
Pre scale corrupted region 0 0 0
Post scale normal region 400 400 400
Post scale corrupted region 512 36865 36865
//With alignment
Pre scale normal region 0 0 0
Pre scale corrupted region 0 0 0
Post scale normal region 400 400 400
Post scale corrupted region 0 0 0
I believe sw_scale sees that the output linesize is 32-byte aligned and writes past the width of 426, putting garbage in the next 22 bytes of data, thinking this is simply padding, when in my case it is displayable area.
That's actually correct, swscale indeed does that; good analysis. There are two ways to get rid of this:
disable all SIMD code using av_set_cpu_flags_mask(0).
write the re-scaled 426xN image in a temporary buffer and then manually copy the pixels into the unpadded destination plane.
The reason ffmpeg/swscale overwrites the destination is performance. If you don't care about runtime and want the simplest code, use the first solution. If you do want performance and don't mind slightly more complicated code, use the second solution.
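To make the second option concrete, here is a minimal sketch (mine, not from the original answer) that reuses the variables from the example program above (ctx, item_planes, item_lines, window_planes, window_lines, outformat, item_out_width, item_height); depending on your FFmpeg version it may also need #include "libavutil/mem.h" for av_freep:

uint8_t *tmp_planes[4] = {0};
int tmp_lines[4] = {0};
/* align=32 so swscale may pad/overwrite freely inside a scratch image it fully owns */
av_image_alloc(tmp_planes, tmp_lines, item_out_width, item_height, outformat, 32);

sws_scale(ctx, (const uint8_t**)item_planes, item_lines, 0, item_height, tmp_planes, tmp_lines);

/* Copy only the visible samples of every row into the composition target. */
for (int p = 0; p < 3; p++) {
    int sample_w = (p == 0) ? item_out_width : (item_out_width + 1) / 2; /* chroma is half width in 4:2:2 */
    for (int y = 0; y < item_height; y++) {
        memcpy(window_planes[p] + y * window_lines[p],
               tmp_planes[p] + y * tmp_lines[p],
               sample_w * 2 /* 2 bytes per 10-bit sample */);
    }
}
av_freep(&tmp_planes[0]);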
I wrote the following RenderScript (rs) code in order to calculate the magnitude and the direction within the same kernel as the Sobel gradients.
#pragma version(1)
#pragma rs java_package_name(com.example.xxx)
#pragma rs_fp_relaxed
rs_allocation bmpAllocIn, direction;
int32_t width;
int32_t height;
// Sobel, magnitude and direction
float __attribute__((kernel)) sobel_XY(uint32_t x, uint32_t y) {
    float sobX = 0, sobY = 0, magn = 0;
    // leave a border of 1 pixel
    if (x > 0 && y > 0 && x < (width - 1) && y < (height - 1)) {
        uchar4 c11 = rsGetElementAt_uchar4(bmpAllocIn, x-1, y-1);
        uchar4 c12 = rsGetElementAt_uchar4(bmpAllocIn, x-1, y);
        uchar4 c13 = rsGetElementAt_uchar4(bmpAllocIn, x-1, y+1);
        uchar4 c21 = rsGetElementAt_uchar4(bmpAllocIn, x, y-1);
        uchar4 c23 = rsGetElementAt_uchar4(bmpAllocIn, x, y+1);
        uchar4 c31 = rsGetElementAt_uchar4(bmpAllocIn, x+1, y-1);
        uchar4 c32 = rsGetElementAt_uchar4(bmpAllocIn, x+1, y);
        uchar4 c33 = rsGetElementAt_uchar4(bmpAllocIn, x+1, y+1);

        sobX = (float) c11.r - c31.r + 2*(c12.r - c32.r) + c13.r - c33.r;
        sobY = (float) c11.r - c13.r + 2*(c21.r - c23.r) + c31.r - c33.r;

        float d = atan2(sobY, sobX);
        rsSetElementAt_float(direction, d, x, y);
        magn = hypot(sobX, sobY);
    }
    else {
        magn = 0;
        rsSetElementAt_float(direction, 0, x, y);
    }
    return magn;
}
And the Java part:
float[] gm = new float[width*height]; // gradient magnitude
float[] gd = new float[width*height]; // gradient direction
ScriptC_sobel script;
script=new ScriptC_sobel(rs);
script.set_bmpAllocIn(Allocation.createFromBitmap(rs, bmpGray));
// dirAllocation: reference to the global variable "direction" in the rs script. This
// dirAllocation is actually the second output of the kernel. It will be "filled" by
// the rsSetElementAt_float() call, which includes a reference to the current
// element (x,y), as the kernel passes over the allocation.
Type.Builder TypeDir = new Type.Builder(rs, Element.F32(rs));
TypeDir.setX(width).setY(height);
Allocation dirAllocation = Allocation.createTyped(rs, TypeDir.create());
script.set_direction(dirAllocation);
// outAllocation: the kernel will slide along this global float allocation, which is
// "formally" the output. (In principle the roles of outAllocation (magnitude) and the
// second global variable direction (dirAllocation) could have been switched; the kernel
// just needs at least one in- or out-Allocation to "slide" along.)
Type.Builder TypeOut = new Type.Builder(rs, Element.F32(rs));
TypeOut.setX(width).setY(height);
Allocation outAllocation = Allocation.createTyped(rs, TypeOut.create());
script.forEach_sobel_XY(outAllocation); //start kernel
// here comes the problem
outAllocation.copyTo(gm);
dirAllocation.copyTo(gd);
In a nutshell: this code works on my older Galaxy Tab 2 (API 17) but it crashes (Fatal signal 7 (SIGBUS), code 2, fault addr 0x9e6d4000 in tid 6385) on my Galaxy S5 (API 21). The strange thing is that when I use a simpler kernel that just calculates the Sobel X or Sobel Y gradients in the very same way (except for the second allocation, here used for the direction), it also works on the S5. Thus, the problem cannot be some compatibility issue. Also, as I said, the kernel itself passes without problems (I can log the magnitude and direction values), but it struggles with the above copyTo statements. As you can see, the gm and gd float arrays have the same dimensions (width*height) as all the other allocations used by the kernel. Any idea what the problem could be? Or is there an alternative, more robust way to do the whole thing?
I have found the implementation of PSNR in OpenCV written in C++, but I am having trouble implementing it in JavaCV.
http://docs.opencv.org/doc/tutorials/highgui/video-input-psnr-ssim/video-input-psnr-ssim.html#image-similarity-psnr-and-ssim
double getPSNR(const Mat& I1, const Mat& I2)
{
    Mat s1;
    absdiff(I1, I2, s1);       // |I1 - I2|
    s1.convertTo(s1, CV_32F);  // cannot make a square on 8 bits
    s1 = s1.mul(s1);           // |I1 - I2|^2

    Scalar s = sum(s1);        // sum elements per channel
    double sse = s.val[0] + s.val[1] + s.val[2]; // sum channels

    if( sse <= 1e-10) // for small values return zero
        return 0;
    else
    {
        double mse = sse / (double)(I1.channels() * I1.total());
        double psnr = 10.0 * log10((255 * 255) / mse);
        return psnr;
    }
}
For example:
What is the Mat type? Is it the same as MatVector in JavaCV?
How do I do absdiff for a MatVector?
I can't find the type Scalar.
How do I do sum(s1)?
Thanks and Regards,
Jason
In this case Mat is an array of RGB values from your image.
Scalar in this case is a list of 3 numbers.
What absdiff(I1, I2, s1) says is: take a pixel from your first image (I1), which has color/grayscale/RGBA channels etc., subtract it from the corresponding pixel in image 2 (I2), take the absolute value of the difference, and store it in your allocated matrix/array (s1). If you had an RGB image you'd get the absolute differences |R1-R2|, |G1-G2|, |B1-B2| and store those 3 values, where 1 is from image one and 2 is from image two, doing so for all pixels.
What sum(s1) says is: in s1, which stores the per-pixel color differences between the two images, sum up all the red values, sum up all the blue values, and sum up all the green values, and return a list of 3 numbers representing the totals of each channel.
Just replace RGB with YMK or anything else you might be using.
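For instance (illustrative numbers, not from the original answer): if a pixel is (200, 100, 50) in I1 and (180, 120, 50) in I2, absdiff stores (20, 20, 0) for that pixel, and sum then adds those per-channel values over every pixel of the image.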
More information about the basic types including Matrix and Scalar can be found in the opencv documentation here: http://opencv.willowgarage.com/documentation/cpp/basic_structures.html and some code can be found near this file and directory: https://github.com/Itseez/opencv/blob/master/modules/core/include/opencv2/core/types_c.h
"The class Mat represents a 2D numerical array that can act as a matrix (and further it’s referred to as a matrix), image, optical flow map etc. It is very similar to CvMat type from earlier versions of OpenCV, and similarly to CvMat , the matrix can be multi-channel, but it also fully supports ROI mechanism, just like IplImage ."
I ran into the same problem and translated the code above into Java with JavaCV. Here is my code:
private static double getPSNR(CvMat I1, CvMat I2) {
    CvMat s1 = CvMat.create(I1.rows(), I1.cols(), I1.depth(), I1.nChannels()); // create matrix with same size as I1
    cvAbsDiff(I1, I2, s1);                                               // |I1 - I2|
    CvMat s1_squared = cvCreateMat(s1.rows(), s1.cols(), CV_32FC3);      // 32-bit, 3-channel matrix for the squares
    cvMul(s1, s1, s1_squared, 1);                                        // |I1 - I2|^2
    CvScalar scalar = cvSum(s1_squared);                                 // sum elements per channel
    double sse = scalar.getVal(0) + scalar.getVal(1) + scalar.getVal(2); // sum channels
    double mse = sse / (double) (s1.channels() * s1.total());
    double psnr = 10.0 * Math.log10((255 * 255) / mse);
    return psnr;
}
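For reference (not from the original answer), both versions compute the standard PSNR for 8-bit images:

\mathrm{PSNR} = 10 \log_{10}\!\left(\frac{255^2}{\mathrm{MSE}}\right), \qquad \mathrm{MSE} = \frac{\mathrm{SSE}}{\text{channels} \times \text{pixels}}

where SSE is the summed squared per-channel difference; a higher PSNR means the two images are more similar.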
What do 2 & 3 mean in this and how can I change them?
CvMat* rot = cvCreateMat(2,3,CV_32FC1)
When I change these two values I get an OpenCV GUI error handler:
size of input arguments do not match()
in function cvConvertScale.\cxconvert.cpp(1601)
I want to understand what that means.
Update:
The code is:
#include <cv.h>
#include <highgui.h>

int main()
{
    CvMat* rot = cvCreateMat(2, 3, CV_32FC1);
    IplImage *src, *dst;
    src = cvLoadImage("doda.jpg");

    // make a copy of the gray image (src)
    dst = cvCloneImage(src);
    dst->origin = src->origin;

    // fill dst with zeros
    cvZero(dst);

    // Compute rotation matrix
    double x = 0.0;

    // loop to rotate from 0 to 360 in steps of 90, advancing on any key press
    for (int i = 1; i <= 5; i++)
    {
        CvPoint2D32f center = cvPoint2D32f(src->width/2, src->height/2);
        double angle = 0 + x;
        double scale = 0.6;
        cv2DRotationMatrix(center, angle, scale, rot);

        // Do the transformation
        cvWarpAffine(src, dst, rot);

        cvNamedWindow("Affine_Transform", 1);
        cvShowImage("Affine_Transform", dst);

        if (i <= 4)
            x = x + 90.0;
        else
            x = 0.0;
        cvWaitKey();
    }

    cvReleaseImage(&dst);
    cvReleaseMat(&rot);
    return 0;
}
2 and 3 are the row and column counts of the matrix you're creating.
From Introduction to programming with OpenCV:
Allocate a matrix:
CvMat* cvCreateMat(int rows, int cols, int type);
type: Type of the matrix elements. Specified in form
CV_<bit_depth>(S|U|F)C<number_of_channels>. E.g.: CV_8UC1 means an
8-bit unsigned single-channel matrix, CV_32SC2 means a 32-bit signed
matrix with two channels.
Example:
CvMat* M = cvCreateMat(4,4,CV_32FC1);
Changing them is as simple as substituting different values. But I guess you should already know that.
2 = number of rows and 3 = number of columns in your matrix, rot.
Can you post the entire code? Or maybe tell us what you want to achieve? Are you trying to rotate an image?
Also, I'd recommend upgrading to OpenCV 2.0 which has a C++ interface. With the new version, you can extensively use the Mat class which handles everything (matrices,images,etc.) and makes things much simpler.
You get an error using any other shape than 2x3 because it is then meaningless to OpenCV when you use rot for a rotation.
Take a look at Jacob's answer.
He describes the rotation matrix components in detail.
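For context (not stated in the original answers): cv2DRotationMatrix and cvWarpAffine work with a 2x3 affine transform matrix, which maps each source point (x, y) as

\begin{pmatrix} x' \\ y' \end{pmatrix} = \begin{pmatrix} a_{11} & a_{12} & b_1 \\ a_{21} & a_{22} & b_2 \end{pmatrix} \begin{pmatrix} x \\ y \\ 1 \end{pmatrix}

so the left 2x2 block carries the rotation/scale and the last column carries the translation. A matrix of any other shape cannot describe this mapping, which is why rot must be created as cvCreateMat(2,3,CV_32FC1) here.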
I would like to have something that looks something like this. Two different colors are not necessary.
(source: sourceforge.net)
I already have the audio data (one sample per millisecond) from a stereo WAV in two int arrays, one each for the left and right channel. I have made a few attempts, but they don't look anywhere near as clear as this; my attempts get too spiky or turn into a compact lump.
Any good suggestions? I'm working in C#, but pseudocode is OK.
Assume we have
a function DrawLine(color, x1, y1, x2, y2)
two int arrays with data, right[] and left[], of length L
data values between -32768 and 32767
If you make any other assumptions just write them down in your answer.
for(i = 0; i < L - 1; i++) {
// What magic goes here?
}
This is how it turned out when I applied the solution Han provided. (only one channel)
(image: http://www.imagechicken.com/uploads/1245877759099921200.jpg)
You'll likely have more than one sample for each pixel. For each group of samples mapped to a single pixel, you could draw a (vertical) line segment from the minimum value in the sample group to the maximum value. If you zoom in to one sample per pixel or less, this doesn't work anymore, and the 'nice' solution would be to display the sinc-interpolated values.
Because DrawLine cannot paint a single pixel, there is a small problem when the minimum and maximum are the same. In that case you could copy a single-pixel image to the desired position, as in the code below:
double samplesPerPixel = (double)L / _width;
double firstSample = 0;
int endSample = firstSample + L - 1;
for (short pixel = 0; pixel < _width; pixel++)
{
    int lastSample = __min(endSample, (int)(firstSample + samplesPerPixel));
    double Y = _data[channel][(int)firstSample];
    double minY = Y;
    double maxY = Y;
    for (int sample = (int)firstSample + 1; sample <= lastSample; sample++)
    {
        Y = _data[channel][sample];
        minY = __min(Y, minY);
        maxY = __max(Y, maxY);
    }
    x = pixel + _offsetx;
    y1 = Value2Pixel(minY);
    y2 = Value2Pixel(maxY);
    if (y1 == y2)
    {
        g->DrawImageUnscaled(bm, x, y1);
    }
    else
    {
        g->DrawLine(pen, x, y1, x, y2);
    }
    firstSample += samplesPerPixel;
}
Note that Value2Pixel scales a sample value to a pixel value (in the y-direction).
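The original answer does not define Value2Pixel; one plausible definition, purely as a sketch (assuming a hypothetical plot height _height in pixels, analogous to _width above, and 16-bit samples as stated in the question), is:

int Value2Pixel(double sampleValue)
{
    /* map -32768..32767 onto _height-1..0, so larger sample values land nearer the top */
    return (int)((32767.0 - sampleValue) * (_height - 1) / 65535.0);
}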
You might want to look into the R language for this. I don't have very much experience with it, but it's used largely in statistical analysis/visualization scenarios. I would be surprised if they didn't have some smoothing function to get rid of the extremes like you mentioned.
And you should have no trouble importing your data into it. Not only can you read flat text files, but it's also designed to be easily extensible with C, so there is probably some kind of C# interface as well.