I'm building a quad-copter using an accelerometer and gyro. The form looks like this one:
My pitch is controlled by the front and back motors.
My roll is controlled by the left and right motors.
I used PID and servo libraries.
Here's my pitch stability code:
#include <PID_v1.h>   // Arduino PID library
#include <Servo.h>

double Kp = 3.15, Ki = 4.3, Kd = 0.6;
double PitchInput, PitchOutput, PitchSetpoint;
int throttle = 1000;
int backPower, frontPower;

Servo frontESC, backESC;
PID PitchPID(&PitchInput, &PitchOutput, &PitchSetpoint, Kp, Ki, Kd, DIRECT);

void setup() {
  PitchPID.SetMode(AUTOMATIC);
  PitchPID.SetOutputLimits(-150, 150);
  backESC.attach(3);
  frontESC.attach(5);
  // arm the motors
  backESC.writeMicroseconds(2000);
  frontESC.writeMicroseconds(2000);
  delay(5000);
  backESC.writeMicroseconds(700);
  frontESC.writeMicroseconds(700);
  delay(2000);
}

void loop() {
  // I have the angle and pitch from my IMU code (not shown)
  int xAngle = GetX();
  int Pitch = GetPitch();
  PitchInput = Pitch;
  PitchSetpoint = 0;
  PitchPID.Compute();
  backPower = PitchOutput;
  frontPower = -PitchOutput;
  // note: backPower is written to the front ESC and frontPower to the back ESC,
  // so a positive PID output speeds up the front motor and slows the back motor
  frontESC.writeMicroseconds(backPower + throttle);
  backESC.writeMicroseconds(frontPower + throttle);
}
As you can see, I don't yet have a real algorithm to stabilize the quad.
I would be grateful for some help. I have already tried some samples, but it didn't go well. Even help that doesn't use my PID library would be appreciated.
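For reference, the roll axis mentioned above would mirror this pitch code with its own PID instance and the left/right ESCs. The sketch below only spells out that symmetry, it is not working stabilization code, and leftESC, rightESC, RollPID, RollInput, RollOutput, RollSetpoint and GetRoll() are placeholder names set up the same way as their pitch counterparts:
// sketch only: roll mirrors the pitch loop
RollInput = GetRoll();      // assumes a GetRoll() helper like GetPitch()
RollSetpoint = 0;
RollPID.Compute();
rightESC.writeMicroseconds(throttle + RollOutput);
leftESC.writeMicroseconds(throttle - RollOutput);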
I am trying to port "Ray Tracing in One Weekend" to a Metal compute shader. I am getting these stripe artifacts in my project:
Is it because my random generator does not work well?
Does anyone have a clue?
// Implemented with reference to https://www.pcg-random.org/
typedef struct { uint64_t state; uint64_t inc; } pcg32_random_t;
uint32_t pcg32_random_r(thread pcg32_random_t* rng)
{
uint64_t oldstate = rng->state;
rng->state = oldstate * 6364136223846793005ULL + rng->inc;
uint32_t xorshifted = ((oldstate >> 18u) ^ oldstate) >> 27u;
uint32_t rot = oldstate >> 59u;
return (xorshifted >> rot) | (xorshifted << ((-rot) & 31));
}
void pcg32_srandom_r(thread pcg32_random_t* rng, uint64_t initstate, uint64_t initseq)
{
rng->state = 0U;
rng->inc = (initseq << 1u) | 1u;
pcg32_random_r(rng);
rng->state += initstate;
pcg32_random_r(rng);
}
// Generate a float in [0, 1)
float randomF(thread pcg32_random_t* rng)
{
//return pcg32_random_r(rng)/float(UINT_MAX);
return ldexp(float(pcg32_random_r(rng)), -32);
}
// Generate a float between x_min and x_max
float randomRange(thread pcg32_random_t* rng, float x_min, float x_max){
return randomF(rng) * (x_max - x_min) + x_min;
}
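For context (this is not from the original post), here is one way such a generator is typically seeded per pixel in a Metal compute kernel so that every thread gets its own PCG stream; the kernel name, buffer indices, width and frameSeed are all hypothetical:
kernel void tracePixel(constant uint&     width     [[buffer(0)]],
                       constant uint64_t& frameSeed [[buffer(1)]],
                       uint2 gid [[thread_position_in_grid]])
{
    pcg32_random_t rng;
    // the second argument selects the PCG stream; using the pixel index gives
    // each thread an independent, uncorrelated sequence
    pcg32_srandom_r(&rng, frameSeed, uint64_t(gid.y) * width + gid.x);
    float u = randomF(&rng); // first sample in [0, 1)
    // ... trace the ray for this pixel using rng ...
}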
I found this link. It says that the primary ray hit point ends up slightly above or below the sphere's surface due to floating-point precision error. It is a z-fighting problem.
Seeing these "circles around the image center" artifacts is almost always a dead giveaway for z-fighting (rays along any such "circle" always have the same distance to any flat object you're looking at, so they either all round up or all round down, giving you this artifact).
This z-fighting then translates to this artifact because sometimes all the rays on such a circle are inside the sphere (meaning their shadow rays get self-occluded by the sphere), or all outside (they do what they should). What you want to do is offset the ray origin of the shadow rays a tiny bit (say, 1e-3f) along the normal direction at the hitpoint.
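A minimal sketch of that offset, assuming the hit record uses the book's p and normal field names (Ray and bounceDir are placeholders for whatever your code already has):
// push the secondary ray's origin slightly off the surface along the normal
// so it cannot immediately re-hit the sphere it just came from
float3 origin = rec.p + 1e-3f * rec.normal;
Ray secondary;
secondary.origin    = origin;
secondary.direction = bounceDir; // shadow-ray or scatter direction as before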
You may also want to read up on Carsten Waechter's article in Ray Tracing Gems 1 - it's for triangles, but explains the problem and potential solution very well.
I wrote some sample code to measure the brightness of an LED by controlling the duty cycle of an LED connected to an Arduino. I want to sweep from the least bright light to the maximum bright light over a specific period of time. When I set desired_brightness = 1, the LED emits light at 93 lux, which is not the least bright light. Any suggestions on how to get the least bright light?
int led = 3; // the pin that the LED is attached to
int brightness =0; // how bright the LED is
int incrementfactor = 10; // how many points to fade the LED by
int desired_brightness = 255 ;
int extra_delay = 1000;
void setup() { // declare the LED pin to be an output:
pinMode(led, OUTPUT);
analogWrite(led, desired_brightness);
}
void loop() {
analogWrite(led, desired_brightness);
brightness=brightness+incrementfactor;
if (brightness==desired_brightness) {
delay(extra_delay);
}
}
I've tailored your code a bit. The main problem was you were going to maximum brightness right away, and were never decreasing it. analogWrite() only takes values from 0 to 255. You were starting at 255 and increasing from there, so it just stayed bright. Try this instead, it's the "breathing effect" you see on so many electronics these days, and loops forever:
int ledPin = 3; // the pin that the LED is attached to
int brightness =0; // how bright the LED is
int extra_delay = 1000; // the extra delay at max
int super_delay = 5000; // super delay at min
int direction = 1; // the dimmer-brighter direction
void setup() {
pinMode(ledPin, OUTPUT);
analogWrite(ledPin, 0);
}
void loop() {
analogWrite(ledPin, brightness); // light at certain brightness
//delay(5); // wait a bit, stay at this level so we can see the effect!
delay(50); // longer delay means much slower change from bright to dim
if (direction == 1) // determine whether to go brighter or dimmer
brightness += 1;
else
brightness -= 1;
if (brightness == 255) // switch direction for next time
{
direction = -1;
delay(extra_delay); // extra delay at maximum brightness
}
if (brightness == 0) // switch direction, go brighter next time
{
direction = 1;
delay(super_delay); // super long delay at minimum brightness
}
}
This will go brighter, then dim, and repeat. The delay is very important -- you can reduce it or lengthen it, but without it, the change happens so fast you can't see it by eye, only on an oscilloscope. Hope this helps you!
EDIT:
Added a "super delay" at minimum brightness for measurement of that level.
One thing to keep in mind is that pulse-width modulation like this still gives the full driving voltage from the output pin. PWM just alters the ratio of the time the pin spends high vs. low. On this Arduino, it's still a voltage swinging instantly and quite rapidly between 0 V and 3.3 V. To get a true analog voltage from this output, you need some circuitry to filter the highs and lows of the PWM into a smoother average DC voltage. If you want to pursue that, search for "RC low-pass filter" or visit a site like this one.
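To give a rough feel for the numbers (my own example, not part of the original answer): a first-order RC filter has a cutoff frequency of f_c = 1/(2*pi*R*C), so R = 10 kΩ and C = 1 µF give f_c ≈ 16 Hz, well below the roughly 490 Hz PWM frequency of a typical Arduino pin, which leaves a fairly smooth average voltage proportional to the duty cycle.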
The code is supposed to fade and copy the window's image to a buffer f, then draw f back onto the window but translated, rotated, and scaled. I am trying to create an effect like a feedback loop when you point a camera plugged into a TV at the TV.
I have tried everything I can think of, logged every variable I could think of, and still it just seems like image(f,0,0) is doing something wrong or unexpected.
What am I missing?
Picture of the double image, mirrored about the x-axis:
PGraphics f;
int rect_size;
int midX;
int midY;
void setup(){
size(1000, 1000, P2D);
f = createGraphics(width, height, P2D);
midX = width/2;
midY = height/2;
rect_size = 300;
imageMode(CENTER);
rectMode(CENTER);
smooth();
background(0,0,0);
fill(0,0);
stroke(255,255);
}
void draw(){
fade_and_copy_pixels(f); //fades window pixels and then copies pixels to f
background(0,0,0); // without this the corners don't get repainted
//transform display window (instead of f)
pushMatrix();
float scaling = 0.90; // x>1 makes image bigger
float rot = 5; //angle in degrees
translate(midX,midY); //makes it so rotations are always around the center
rotate(radians(rot));
scale(scaling);
imageMode(CENTER);
image(f,0,0); // weird double image; something must not be working around here
popMatrix();//returns window matrix to normal
int x = mouseX;
int y = mouseY;
rectMode(CENTER);
rect(x,y,rect_size,rect_size);
}
//fades window pixels and then copies pixels to f
void fade_and_copy_pixels(PGraphics f){
loadPixels(); // load the window's pixels. Do I even need this since I am only reading pixels?
f.loadPixels(); //loads feedback loops pixels
// Loop through every pixel in the window.
// It is faster to grab data from the pixels[] array than to use get() and set().
for (int i = 0; i < pixels.length; i++) {
//////////////FADE PIXELS in window and COPY to f:///////////////
color p = pixels[i];
//get color values, mask then shift
int r = (p & 0x00FF0000) >> 16;
int g = (p & 0x0000FF00) >> 8;
int b = p & 0x000000FF; //no need for shifting
// reduce the value of each color channel proportionally
// fade_percent is between 0 and 1: 0 fades nothing, 1 fades to black immediately
// the smallest value that has any effect is about 0.0039 (slightly more than 1/255) when using floor() with the default 0-255 colorMode
float fade_percent= 0.005; //0.05 = 5%
int r_new = floor(float(r) - (float(r) * fade_percent));
int g_new = floor(float(g) - (float(g) * fade_percent));
int b_new = floor(float(b) - (float(b) * fade_percent));
//maybe later rewrite this to track the remaining difference and round it differently, e.g. faster at first and slower later
//round() doesn't work because it never subtracts that first 1 to get the ball rolling
//floor() always subtracts at least 1 from each value every pass; I can't just subtract 1 every n loops
//keeping a list of all the pixels as floats might work, but probably uses too much memory
//I'll stick with floor() for now
//shift back and or together
p = 0xFF000000 | (r_new << 16) | (g_new << 8) | b_new; // or-ing all the new hex together back into AARRGGBB
f.pixels[i] = p;
////////pixels now copied
}
f.updatePixels();
}
This is a weird one. But let's start with a simpler MCVE that isolates the problem:
PGraphics f;
void setup() {
size(500, 500, P2D);
f = createGraphics(width, height, P2D);
}
void draw() {
background(0);
rect(mouseX, mouseY, 100, 100);
copyPixels(f);
image(f, 0, 0);
}
void copyPixels(PGraphics f) {
loadPixels();
f.loadPixels();
for (int i = 0; i < pixels.length; i++) {
color p = pixels[i];
f.pixels[i] = p;
}
f.updatePixels();
}
This code exhibits the same problem as your code, without any of the extra logic. I would expect this code to show a rectangle wherever the mouse is, but instead it shows a rectangle at a position reflected over the X axis. If the mouse is on the top of the window, the rectangle is at the bottom of the window, and vice-versa.
I think this is caused by the P2D renderer being OpenGL, which has an inverted Y axis (0 is at the bottom instead of the top). So it seems like when you copy the pixels over, it's going from screen space to OpenGL space... or something. That definitely seems buggy though.
For now, there are two things that seem to fix the problem. First, you could just use the default renderer instead of P2D. That seems to fix the problem.
Or you could get rid of the for loop inside the copyPixels() function and just do f.pixels = pixels; for now. That also seems to fix the problem, but again it feels pretty buggy.
If somebody else (paging George) doesn't come along with a better explanation by tomorrow, I'd file a bug on Processing's GitHub. (I can do that for you if you want.)
Edit: I've filed an issue here, so hopefully we'll hear back from a developer in the next few days.
Edit Two: Looks like a fix has been implemented and should be available in the next release of Processing. If you need it now, you can always build Processing from source.
An easier fix, and it works like a charm:
add f.beginDraw(); before and f.endDraw(); after using f:
loadPixels(); //load windows pixels. dont need because I am only reading pixels?
f.loadPixels(); //loads feedback loops pixels
// Loop through every pixel in window
//it is faster to grab data from pixels[] array, so dont use get and set, use this
f.beginDraw();
and
f.updatePixels();
f.endDraw();
Processing needs to know when it's drawing into an offscreen buffer and when it isn't.
In this image you can see that it works:
Using video capture in Processing, I want to understand how to set up a small section of the camera feed that the camera will constantly scan. Within that defined section, I want the camera to look for a change in brightness (i.e. the brightness becomes dark). If the brightness changes, I just want it to return 'shadow detected.' Can anyone help me get started? I am very new to this language.
You can easily get a small section of the camera feed (or any image) using PImage's get() method, to which you pass the coordinates describing your section rectangle (x, y, width, height).
This is also known as a region of interest (ROI) in computer vision.
Once you retrieve this region, you can process it.
Here is a minimal example showing how to get the ROI and process it (in this case simply applying a threshold based on the mouse position):
import processing.video.*;
Capture cam;
int w = 320;
int h = 240;
int np = w*h;
int roiX = 80;
int roiY = 60;
int roiW = 160;
int roiH = 120;
PImage roi;
void setup(){
size(w,h);
cam = new Capture(this,w,h);
cam.start();
}
void draw(){
image(cam,0,0);
if(roi != null){
//process ROI
// roi.filter(GRAY);
roi.filter(THRESHOLD,(float)mouseX/width);
//display output
image(roi,roiX,roiY);
}
}
void captureEvent(Capture c){
c.read();
roi = c.get(roiX,roiY,roiW,roiH);
}
You can get the brightness of a pixel using the brightness() function.
This means you can get the average brightness of your ROI by adding the brightness levels of each pixel, then dividing the result by the total number of pixels:
import processing.video.*;
Capture cam;
int w = 320;
int h = 240;
int np = w*h;
int roiX = 80;
int roiY = 60;
int roiW = 160;
int roiH = 120;
PImage roi;
void setup(){
size(w,h);fill(127);
cam = new Capture(this,w,h);
cam.start();
}
void draw(){
image(cam,0,0);
if(roi != null){
//process ROI
// roi.filter(GRAY);
roi.filter(THRESHOLD,(float)mouseX/width);
//display output
image(roi,roiX,roiY);
text("ROI brightness:"+brightness(roi),10,15);
}
}
void captureEvent(Capture c){
c.read();
roi = c.get(roiX,roiY,roiW,roiH);
}
float brightness(PImage in){
float brightness = 0.0;
int numPixels = in.pixels.length;
for(int i = 0 ; i < numPixels; i++) brightness += brightness(in.pixels[i]);
return brightness/numPixels;
}
If you've set your ROI to cover the bright area, you should see the average brightness go down as the shadow appears. Simply using a threshold value in a condition should allow you to act on it:
import processing.video.*;
Capture cam;
int w = 320;
int h = 240;
int np = w*h;
int roiX = 80;
int roiY = 60;
int roiW = 160;
int roiH = 120;
PImage roi;
float brightness = 0.0;
float shadowThresh = 127.0;
void setup(){
size(w,h);fill(127);
cam = new Capture(this,w,h);
cam.start();
}
void draw(){
image(cam,0,0);
if(roi != null){
//process ROI
// roi.filter(GRAY);
roi.filter(THRESHOLD,(float)mouseX/width);
brightness = brightness(roi);
if(brightness < shadowThresh) println("shadow detected");
//display output
image(roi,roiX,roiY);
text("ROI brightness:"+brightness,10,15);
}
}
void captureEvent(Capture c){
c.read();
roi = c.get(roiX,roiY,roiW,roiH);
}
float brightness(PImage in){
float brightness = 0.0;
int numPixels = in.pixels.length;
for(int i = 0 ; i < numPixels; i++) brightness += brightness(in.pixels[i]);
return brightness/numPixels;
}
Hopefully these examples are easy to read and understand.
Note that these aren't as fast as they could be.
Be sure to also check out the video examples that come with Processing (Examples > Libraries > video > Capture), especially BrightnessThresholding and BrightnessTracking.
If you want to learn more about techniques like these you should look into computer vision and the OpenCV library. There is a very nice OpenCV Processing library which you can now easily install via Sketch > Import Library... > Add Library... and select OpenCV for Processing. It also comes with examples on using brightness.
This covers the pixel manipulation side, but another important aspect of doing this sort of development is setup. It's crucial to have a reliable setup: it will make your life easier. What I mean by that is, in your case:
having control over the camera: being able to control auto white balance/brightness/etc. as automatic adjustments may throw off your values.
having control over the scene: making sure you reduce the risk of accidental lights messing with your tracking, or of something bumping the camera or the object you're tracking.
Assuming the camera data you are analysing is a PImage, you can apply filters to the data to get it into a Black/White or grey scale form. The docs on PImage Filter modes: https://processing.org/reference/filter_.html should be useful.
You will probably have to do a pixel analysis - there may be a library to help here, but you can get an array of pixels from the filtered PImage, loop through it, and check the values against your baseline values to see if they are brighter or darker. If they are greyscale on the 0-255 scale, a pixel is lighter if its value is higher than the baseline and darker if it is lower.
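As a rough sketch of that comparison (my example, not from the answer above; it assumes you keep a greyscale baseline PImage of the same ROI, and countDarker/baseline are placeholder names):
// counts how many ROI pixels are darker than the stored baseline
int countDarker(PImage roi, PImage baseline) {
  roi.loadPixels();
  baseline.loadPixels();
  int darker = 0;
  for (int i = 0; i < roi.pixels.length; i++) {
    // brightness() returns 0-255 in the default colorMode
    if (brightness(roi.pixels[i]) < brightness(baseline.pixels[i])) darker++;
  }
  return darker;
}
If a large fraction of the pixels comes out darker than the baseline, you could report "shadow detected".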
I am doing several steps of reprojections of a point cloud (around 40 million points initially, ~20 million while processing). The program crashes at seemingly random points in one of these two loops. If I run it with a smaller subset (~10 million points) everything works fine.
//Projection of Point Cloud into a sphere
pcl::PointCloud<pcl::PointXYZ>::Ptr projSphere(pcl::PointCloud<pcl::PointXYZ>::Ptr cloud,int radius)
{
//output cloud
pcl::PointCloud<pcl::PointXYZ>::Ptr output(new pcl::PointCloud<pcl::PointXYZ>);
//time marker
int startTime = time(NULL);
cout<<"Start Sphere Projection"<<endl;
// factor by which each point vector is multiplied to get a distance of radius from the origin
float scalar;
for (int i=0;i<cloud->size();i++)
{
if (i%1000000==0) cout<<i<<endl;
//P
pcl::PointXYZ tmpin=cloud->points.at(i);
//P'
pcl::PointXYZ tmpout;
scalar=radius/(sqrt(pow(tmpin.x,2)+pow(tmpin.y,2)+pow(tmpin.z,2)));
tmpout.x=tmpin.x*scalar;
tmpout.y=tmpin.y*scalar;
tmpout.z=tmpin.z*scalar;
//Adding P' to the output cloud
output->push_back(tmpout);
}
cout<<"Finished projection of "<<output->size()<<" points in "<<time(NULL)-startTime<<" seconds"<<endl;
return(output);
}
//Stereographic Projection
pcl::PointCloud<pcl::PointXYZ>::Ptr projStereo(pcl::PointCloud<pcl::PointXYZ>::Ptr cloud)
{
//output cloud
pcl::PointCloud<pcl::PointXYZ>::Ptr outputSt(new pcl::PointCloud<pcl::PointXYZ>);
//time marker
int startTime = time(NULL);
cout<<"Start Stereographic Projection"<<endl;
for (int i=0;i<cloud->size();i++)
{
//P
if (i%1000000==0) cout<<i<<endl;
pcl::PointXYZ tmpin=cloud->points.at(i);
//P'
pcl::PointXYZ tmpout;
//equation
tmpout.x=tmpin.x/(1.0+tmpin.z);
tmpout.y=tmpin.y/(1.0+tmpin.z);
tmpout.z=0;
//Adding P' to the output cloud
outputSt->push_back(tmpout);
}
cout<<"Finished projection of"<<outputSt->size()<<" points in "<<time(NULL)-startTime<<" seconds"<<endl;
return(outputSt);
}
If I do all the steps independently, by saving/loading the point clouds on the hard disk and rerunning the program for each step, it also works fine. I'd like to provide the entire source files, but I'm not sure how, or whether it's necessary.
Thanks in advance
Edit 1:
After about a week I still have no idea what the issue might be, since the crashes are somewhat random, but not entirely. I tested the program under different system loads (freshly rebooted, with heavy-duty programs loaded, etc.), which made no apparent difference. Since I thought it might be a memory issue, I tried to move the large objects from the stack to the heap (initialising them with new), which also made no difference. By far the largest object is the raw input file, which I open and close like this:
ifstream file;
file.open(infile);
/*......*/
file.close();
delete file;
Is that properly done, so that after the method is completed the memory is released?
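For reference (this is not from the original post), the usual pattern is to let the std::ifstream close itself when it goes out of scope, so no explicit close() or delete is needed; a minimal sketch with a hypothetical function name:
#include <fstream>
#include <string>

void readCloudFile(const std::string& infile)
{
    std::ifstream file(infile); // opened here
    std::string line;
    while (std::getline(file, line))
    {
        // ... parse the line ...
    }
} // file is closed automatically here, and its resources are released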
Edit 2:
I kept trying, and finally managed to put all the steps into one function like this:
void stereoTiffI(string infile, string outfile, int length)
{
//set up file input
cout<<"Opening file: "<< infile<<endl;
ifstream file;
file.open(infile);
string line;
//skip first lines
for (int i=0;i<9;i++)
{
getline(file,line);
}
//output cloud
pcl::PointCloud<pcl::PointXYZ> cloud;
getline(file,line);
//indexes for string parsing, coordinates and starting Timer
int i=0;
int j=0;
int k=0;
float x=0;
float y=0;
float z=0;
float intensity=0;
float scalar=0;
int startTime = time(NULL);
pcl::PointXYZ tmp;
//begin loop
cout<<"Begin reading and projecting"<< infile<<endl;
while (!file.eof())
{
getline(file,line);
i=0;
j=line.find(" ");
x=atof(line.substr(i,j).c_str());
i=line.find(" ",i)+1;
j=line.find(" ",i)-i;
y=atof(line.substr(i,j).c_str());
i=line.find(" ",i)+1;
j=line.find(" ",i)-i;
z=atof(line.substr(i,j).c_str());
//i=line.find(" ",i)+1;
//j=line.find(" ",i)-i;
//intensity=atof(line.substr(i,j).c_str());
//leave out points below scanner height
if (z>0)
{
//projection to a hemisphere with radius 1
scalar=1/(sqrt(pow(x,2)+pow(y,2)+pow(z,2)));
x=x*scalar;
y=y*scalar;
z=z*scalar;
//stereographic projection
x=x/(1.0+z);
y=y/(1.0+z);
z=0;
tmp.x=x;
tmp.y=y;
tmp.z=z;
//tmp.intensity=intensity;
cloud.push_back(tmp);
k++;
if (k%1000000==0)cout<<k<<endl;
}
}
cout<<"Finished producing projected cloud in: "<<time(NULL)-startTime<<" with "<<cloud.size()<<" points."<<endl;
And this actually works quite nicely and quickly. As a next step I tried to use the point type PointXYZI, because I also need the intensity of the scanned points. And guess what, the program crashes at around 17000000 points again, and again I have no idea why. Please help.
OK, I solved it. Dr. Memory gave me the right hint by reporting a heap allocation error. After a bit of googling, I enabled Large Addresses in Visual Studio (Properties -> Linker -> System).
Everything works like a charm.
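For anyone who hits the same thing (my notes, not part of the original post): that Visual Studio setting corresponds to the MSVC linker flag /LARGEADDRESSAWARE, which lets a 32-bit build address more than the default 2 GB (up to 4 GB on a 64-bit OS). The numbers are plausible: pcl::PointXYZI is typically padded to 32 bytes for SSE alignment, so about 17,000,000 points already take roughly 17e6 * 32 B ≈ 540 MB, and a growing std::vector-style container can briefly need the old and the new buffer at the same time during push_back, which is easy to exhaust in a 2 GB address space.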