Allegro 5 problems - allegro

The code below doesn't work; it produces a blank screen. But if I change the filled-rectangle line near the bottom to:
al_draw_filled_rectangle(100, 100, 100+15, 100+15, al_map_rgb(155, 255, 155));
it draws a square at the expected coordinates. What's up?
#define ALLEGRO_STATICLINK
#include <allegro5/allegro.h>
#include <allegro5/allegro_primitives.h>

int main(int argc, char **argv)
{
    ALLEGRO_DISPLAY *display;

    if(!al_init())
    {
        return -1;
    }

    display = al_create_display(640, 480);
    if(!display)
    {
        return -1;
    }

    if(!al_init_primitives_addon())
    {
        return -1;
    }

    al_draw_filled_rectangle(73, 493, 73+15, 493+15, al_map_rgb(155, 255, 155));
    al_flip_display();
    al_rest(10);
    return 0;
}

You are trying to draw at a Y coordinate greater than the screen height.
al_draw_filled_rectangle(73, 493, 73+15, 493+15, al_map_rgb(155, 255, 155));
draws from Y = 493 to Y = 493+15, and both 493 > 480 and 493+15 > 480.
display = al_create_display(640, 480);
sets 480 as your display height, so anything drawn at a Y coordinate beyond that ends up off the bottom of the screen and nothing is shown.
When you use
al_draw_filled_rectangle(100, 100, 100+15, 100+15, al_map_rgb(155, 255, 155));
you are actually on the screen, so it works.
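If the coordinates are computed at runtime, a quick bounds check turns this silent blank screen into something you can diagnose. A minimal sketch, assuming the 640x480 display from the question (the helper name is made up for illustration, not part of the original code):

/* Only draw the square if it fits entirely inside the display. */
void draw_square_if_visible(float x, float y, float size, ALLEGRO_COLOR c)
{
    const float disp_w = 640.0f, disp_h = 480.0f;   /* assumed display size */
    if (x >= 0 && y >= 0 && x + size <= disp_w && y + size <= disp_h)
        al_draw_filled_rectangle(x, y, x + size, y + size, c);
    /* else: the square would be (partly) off-screen, so nothing is drawn */
}

Calling draw_square_if_visible(73, 493, 15, al_map_rgb(155, 255, 155)) then simply skips the draw instead of leaving you to wonder why the screen is blank.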

Related

Is it possible to split a line as tall as the whole screen (from 0 to height) into lines as tall as half the screen in Processing?

I need some help with code for an exam at my university.
What I'm trying to do here is a visual representation of a conversation between two people. The code starts when you press "L" and then works a bit like a walkie-talkie: when the other person speaks you need to push "A", and when the word goes back to the first person you press "L" again, and so on.
I like the result of the code so far, but my professor told me to try something and I'm not able to do it.
He would like the coloured lines to cover the whole screen vertically, not just a portion of it, and when they reach the right edge of the screen they should split in two, so that the row that was just created becomes the top half of the screen and the new one is created in the other half. When the second row finishes and a third row is created, the screen must split in three, and so on.
I tried to achieve this but I messed up the code, so I'm posting the last working version here.
I hope you can help in any way; all kinds of suggestions are appreciated, thank you!
import processing.sound.*;

AudioIn input;
Amplitude amp;
int y;
int x;
int incY = 141;
color bg = color(255, 0);
color high;
color low;
color mid;

void setup() {
  size(1440, 846);
  background(bg);
  pixelDensity(displayDensity());
  input = new AudioIn(this, 0);
  input.start();
  amp = new Amplitude(this);
  amp.input(input);
}

void draw() {
  textSize(40);
  fill(0);
  float volume = amp.analyze();
  int lncolor = int(map(volume, 0, 0.05, 0, 3));
  noFill();
  strokeCap(SQUARE);
  //strokeWeight(10);
  if (lncolor==0) {
    stroke(bg);
  }
  if (lncolor==1) {
    stroke(low);
  }
  if (lncolor==2) {
    stroke(mid);
  }
  if (lncolor==3) {
    stroke(high);
  }
  if (key == 'a') {
    x++;
    if (x==width) {
      x = 0;
      y = y + incY;
    }
    line(x, y, x, y+incY);
    high = color(72, 16, 255);
    low = color(179, 155, 255);
    mid = lerpColor(low, high, .5);
  }
  if (key == 'l') {
    x++;
    if (x==width) {
      x = 0;
      y = y + incY;
    }
    line(x, y, x, y+incY);
    high = color(255, 128, 16);
    low = color(255, 203, 156);
    mid = lerpColor(low, high, .5);
  }
}
The following code won't solve all of your problems, but does show how to keep splitting the screen into proportionate rectangles based on the number of times the graph exceeds the width of the screen. The advancing green bar at the top is where your current signal would be plotted. I'll leave it to you to figure out how to get all the old signal into its respective rectangle. I was unable to run the code that you posted; error message was "Audio Input not configured in start() method". All that I saw was a blank screen.
int x = 0;
int counter = 1;

void rectGrid(int t, int w, int h) {
  int top;
  for (int k = 0; k < counter; k++) {
    top = t + k*h;
    stroke(0);
    strokeWeight(1);
    fill(random(255));
    rect(0, top, w, h);
  }
}

void setup() {
  size(400, 400);
  background(209);
}

void draw() {
  fill(0, 255, 0);
  rect(0, 0, x++, height/counter);
  if (x == width) {
    counter++;
    println("count = ", counter + " : " + "height = ", height/counter);
    x = 0;
    background(209);
    rectGrid(0, width, height/counter);
  }
}

How to manually apply matrix stack to coordinates in processing

In Processing, when you apply a matrix transformation, you can draw on your canvas without worrying about the "true" position of your x, y coordinates.
I thought that, by the same logic, I could copy a section of the canvas using ParentApplet.get(x, y, width, height) and that it would automatically shift the x and y, but it does not: it uses the coordinates as raw inputs without applying the matrix stack to them.
So the easiest way I see to deal with the problem would be to manually apply the matrix stack to my x, y, width, height values and use the results as inputs to get(). But I cannot find such a function; does one exist?
EDIT: As requested, here's an example of my problem.
So the objective here is to draw a simple shape, copy it and paste it. Without translate, there is no problem:
void settings(){
  size(500, 500);
}

void draw() {
  background(255);
  // Fancy rectangle for visibility
  fill(255, 0, 0);
  rect(0, 0, 100, 100);
  fill(0, 255, 0);
  rect(20, 20, 60, 60);
  // copy rectangle and paste it elsewhere
  PImage img = get(0, 0, 101, 101);
  image(img, 200, 200);
}
Now, if I apply a translation before drawing the shape, I wish I could use the same get() code to copy the exact same drawing:
void settings(){
  size(500, 500);
}

void draw() {
  background(255);
  pushMatrix();
  translate(10, 10);
  // Fancy rectangle for visibility
  fill(255, 0, 0);
  rect(0, 0, 100, 100);
  fill(0, 255, 0);
  rect(20, 20, 60, 60);
  // copy rectangle and paste it elsewhere
  PImage img = get(0, 0, 101, 101);
  image(img, 200, 200);
  popMatrix();
}
But it doesn't work that way: get(0, 0, ...) doesn't use the current transformation matrix, so it doesn't copy pixels starting from (10, 10).
Can you please provide a few more details?
It is possible to manipulate coordinate systems using pushMatrix()/popMatrix(), and you can go further and manually multiply matrices and vectors.
The part that is confusing is that you're calling get(x, y, width, height) but not showing how you render the PImage section. It's hard to guess the matrix stack you're mentioning. Can you post an example snippet?
If you render it at the same x, y you call get() with, it should render with the same x, y shift:
size(640, 360);
noFill();
strokeWeight(9);
PImage placeholderForPGraphics = loadImage("https://processing.org/examples/moonwalk.jpg");
image(placeholderForPGraphics, 0, 0);
int x = 420;
int y = 120;
int w = 32;
int h = 48;
// visualise region of interest
rect(x, y, w, h);
// grab the section sub PImage
PImage section = placeholderForPGraphics.get(x, y, w, h);
//filter the section to make it really standout
section.filter(THRESHOLD);
// display section at same location
image(section, x, y);
Regarding the matrix stack, you can call getMatrix() which will return a PMatrix2D if you're in 2D mode (otherwise a PMatrix3D). This is a copy of the current matrix stack at the state you've called it (any prior operations will be "baked" into this one).
For example:
PMatrix m = g.getMatrix();
printArray(m.get(new float[]{}));
(g.printMatrix() should be easier to print to console, but you need to call getMatrix() if you need an instance to manipulate)
Where g is your PGraphics instance.
You can then manipulate it as you like:
m.translate(10, 20);
m.rotate(radians(30));
m.scale(1.5);
Remember to call applyMatrix() when you're done:
g.applyMatrix(m);
Trivial as it may be I hope this modified version of the above example illustrates the idea:
size(640, 360);
noFill();
strokeWeight(9);
// get the current transformation matrix
PMatrix m = g.getMatrix();
// print to console
println("before");
g.printMatrix();
// modify it
m.translate(160, 90);
m.scale(0.5);
// apply it
g.applyMatrix(m);
// print applied matrix
println("after");
g.printMatrix();
PImage placeholderForPGraphics = loadImage("https://processing.org/examples/moonwalk.jpg");
image(placeholderForPGraphics, 0, 0);
int x = 420;
int y = 120;
int w = 32;
int h = 48;
// visualise region of interest
rect(x, y, w, h);
// grab the section sub PImage
PImage section = placeholderForPGraphics.get(x, y, w, h);
//filter the section to make it really standout
section.filter(THRESHOLD);
// display section at same location
image(section, x, y);
Here's another example drawing a basic pattern into a PGraphics using matrix transformations:
void setup(){
  size(360, 360);
  // draw something manipulating the coordinate system
  PGraphics pg = createGraphics(360, 360);
  pg.beginDraw();
  pg.background(0);
  pg.noFill();
  pg.stroke(255, 128);
  pg.strokeWeight(4.5);
  pg.rectMode(CENTER);
  pg.translate(180, 180);
  for(int i = 0 ; i < 72; i++){
    pg.rotate(radians(5));
    pg.scale(0.95);
    //pg.rect(0, 0, 320, 320, 32, 32, 32, 32);
    polygon(6, 180, pg);
  }
  pg.endDraw();
  // render PGraphics
  image(pg, 0, 0);
}
This is overkill: the same effect could have been drawn much more simply; however, the focus is on calling get() and using transformation matrices. Here is a modified iteration showing the same principle with get(x,y,w,h), then image(section,x,y):
void setup(){
  size(360, 360);
  // draw something manipulating the coordinate system
  PGraphics pg = createGraphics(360, 360);
  pg.beginDraw();
  pg.background(0);
  pg.noFill();
  pg.stroke(255, 128);
  pg.strokeWeight(4.5);
  pg.rectMode(CENTER);
  pg.translate(180, 180);
  for(int i = 0 ; i < 72; i++){
    pg.rotate(radians(5));
    pg.scale(0.95);
    //pg.rect(0, 0, 320, 320, 32, 32, 32, 32);
    polygon(6, 180, pg);
  }
  pg.endDraw();
  // render PGraphics
  image(pg, 0, 0);
  // take a section of PGraphics instance
  int w = 180;
  int h = 180;
  int x = (pg.width - w) / 2;
  int y = (pg.height - h) / 2;
  PImage section = pg.get(x, y, w, h);
  // filter section to emphasise
  section.filter(INVERT);
  // render section at sampled location
  image(section, x, y);
}

void polygon(int sides, float radius, PGraphics pg){
  float angleIncrement = TWO_PI / sides;
  pg.beginShape();
  for(int i = 0 ; i <= sides; i++){
    float angle = (angleIncrement * i) + HALF_PI;
    pg.vertex(cos(angle) * radius, sin(angle) * radius);
  }
  pg.endShape();
}
Here's a final iteration re-applying the last transformation matrix in an isolated coordinate space (using push/pop matrix calls):
void setup(){
  size(360, 360);
  // draw something manipulating the coordinate system
  PGraphics pg = createGraphics(360, 360);
  pg.beginDraw();
  pg.background(0);
  pg.noFill();
  pg.stroke(255, 128);
  pg.strokeWeight(4.5);
  pg.rectMode(CENTER);
  pg.translate(180, 180);
  for(int i = 0 ; i < 72; i++){
    pg.rotate(radians(5));
    pg.scale(0.95);
    //pg.rect(0, 0, 320, 320, 32, 32, 32, 32);
    polygon(6, 180, pg);
  }
  pg.endDraw();
  // render PGraphics
  image(pg, 0, 0);
  // take a section of PGraphics instance
  int w = 180;
  int h = 180;
  int x = (pg.width - w) / 2;
  int y = (pg.height - h) / 2;
  PImage section = pg.get(x, y, w, h);
  // filter section to emphasise
  section.filter(INVERT);
  // print last state of the transformation matrix
  pg.printMatrix();
  // get the last matrix state
  PMatrix m = pg.getMatrix();
  // isolate coordinate space
  pushMatrix();
  // apply last PGraphics matrix
  applyMatrix(m);
  // render section at sampled location
  image(section, x, y);
  popMatrix();
  save("state3.png");
}

void polygon(int sides, float radius, PGraphics pg){
  float angleIncrement = TWO_PI / sides;
  pg.beginShape();
  for(int i = 0 ; i <= sides; i++){
    float angle = (angleIncrement * i) + HALF_PI;
    pg.vertex(cos(angle) * radius, sin(angle) * radius);
  }
  pg.endShape();
}
This is an extreme example, as 0.95 downscale is applied 72 times, hence a very small image is rendered. Also notice the rotation is incremented.
Update
Based on your updated snippet, it seems the confusion is around pushMatrix() and get().
In your scenario, pushMatrix()/translate() will offset the local coordinate system: that is where elements are drawn.
get() is called globally and uses absolute coordinates.
If you're only using translation, you can simply store the translation coordinates and re-use them to sample from the same location:
int sampleX = 10;
int sampleY = 10;

void settings(){
  size(500, 500);
}

void draw() {
  background(255);
  pushMatrix();
  translate(sampleX, sampleY);
  // Fancy rectangle for visibility
  fill(255, 0, 0);
  rect(0, 0, 100, 100);
  fill(0, 255, 0);
  rect(20, 20, 60, 60);
  // copy rectangle and paste it elsewhere
  PImage img = get(sampleX, sampleY, 101, 101);
  image(img, 200, 200);
  popMatrix();
}
Update
Here are a couple more examples of how to compute, rather than hard-code, the translation value:
void settings(){
  size(500, 500);
}

void setup() {
  background(255);
  pushMatrix();
  translate(10, 10);
  // Fancy rectangle for visibility
  fill(255, 0, 0);
  rect(0, 0, 100, 100);
  fill(0, 255, 0);
  rect(20, 20, 60, 60);
  // local to global coordinate conversion using PMatrix
  // g is the global PGraphics instance every PApplet (sketch) uses
  PMatrix m = g.getMatrix();
  printArray(m.get(null));
  // the point in the local coordinate system
  PVector local = new PVector(0, 0);
  // multiply the local point by the transformation matrix to get the global point
  // we pass in null to get a new PVector instance: you can make this more efficient
  // by allocating a single PVector and re-using it instead of this basic demo
  PVector global = m.mult(local, null);
  // copy rectangle and paste it elsewhere
  println("local", local, "->global", global);
  PImage img = get((int)global.x, (int)global.y, 101, 101);
  image(img, 200, 200);
  popMatrix();
}
To calculate the position of a vector based on a transformation matrix, simply multiply the vector by that matrix. Very roughly speaking, that's what happens with push/pop matrix: a transformation matrix is used for each push/pop level, and they are multiplied all the way up to the global coordinate system. (Notice the comment on efficiency/pre-allocating matrices and vectors as well.)
This will be more verbose in terms of code and may need a bit of planning if you're using a lot of nested transformations; however, you get finer control over which transformations you choose to use.
A simpler solution may be to switch to the P3D OpenGL renderer, which allows you to use screenX() and screenY() to do this conversion. (Also check out modelX()/modelY().)
void settings(){
  size(500, 500, P3D);
}

void draw() {
  background(255);
  pushMatrix();
  translate(10, 10);
  // Fancy rectangle for visibility
  fill(255, 0, 0);
  rect(0, 0, 100, 100);
  fill(0, 255, 0);
  rect(20, 20, 60, 60);
  // local to global coordinate conversion using screenX(), screenY()
  float x = screenX(0, 0, 0);
  float y = screenY(0, 0, 0);
  println(x, y);
  PImage img = get((int)x, (int)y, 101, 101);
  image(img, 200, 200);
  popMatrix();
}
Bear in mind that here you only want to grab a rectangle that has a translation applied. get() won't take rotation or scale into account, so for more complex cases you may want to convert not just the top-left point from local to global coordinates, but also the bottom-right one. The idea is to compute the larger axis-aligned bounding box around the transformed box, so that when you call get() the whole area of interest is returned (not just a clipped section).

Changing the alpha value in SDL_SetRenderDrawColor doesn't affect anything (SDL2)

This code shows just a simple window with a color:
#include <SDL.h>

SDL_Window* g_pWindow = 0;
SDL_Renderer* g_pRenderer = 0;

int main(int argc, char* args[])
{
    if (SDL_Init(SDL_INIT_EVERYTHING) >= 0)
    {
        g_pWindow = SDL_CreateWindow("Chapter 1: Setting up SDL",
                                     SDL_WINDOWPOS_UNDEFINED, SDL_WINDOWPOS_UNDEFINED,
                                     640, 480,
                                     SDL_WINDOW_SHOWN);
        if (g_pWindow != 0)
        {
            g_pRenderer = SDL_CreateRenderer(g_pWindow, -1, 0);
        }
    }
    else
    {
        return 1; // sdl could not initialize
    }
    SDL_SetRenderDrawColor(g_pRenderer, 80, 80, 253, 0);
    // clear the window to black
    SDL_RenderClear(g_pRenderer);
    // show the window
    SDL_RenderPresent(g_pRenderer);
    // set a delay before quitting
    SDL_Delay(2000);
    // clean up SDL
    SDL_Quit();
    return 0;
}
I'm testing to see what happens when I change the alpha factor in SDL_SetRenderDrawColor(g_pRenderer, 80, 80, 253, 0). When I change the alpha value from 0 to 255 it doesn't affect anything.
What's the problem here?
First of all, you haven't enabled blending (e.g. SDL_SetRenderDrawBlendMode(renderer, SDL_BLENDMODE_BLEND);).
But anyway, it makes no sense for a clear operation to use blending, and I bet SDL_RenderClear ignores it.
If you want a fullscreen blend, you should draw a fullscreen rectangle with SDL_RenderFillRect.
Writing this line will make alpha values affect transparency:
SDL_SetRenderDrawBlendMode(renderer, SDL_BLENDMODE_BLEND);
There is no need to call it every time or for every effect; just call it once before using transparency.
See the documentation for more blend modes:
https://wiki.libsdl.org/SDL2/SDL_BlendMode
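Putting both answers together, a minimal sketch of the intended usage, reusing g_pRenderer from the question (passing NULL to SDL_RenderFillRect fills the entire rendering target):

// Enable blending once, right after the renderer is created.
SDL_SetRenderDrawBlendMode(g_pRenderer, SDL_BLENDMODE_BLEND);

// Clear to an opaque background colour first.
SDL_SetRenderDrawColor(g_pRenderer, 0, 0, 0, 255);
SDL_RenderClear(g_pRenderer);

// Draw a semi-transparent fullscreen rectangle; here the alpha value is visible.
SDL_SetRenderDrawColor(g_pRenderer, 80, 80, 253, 128);
SDL_RenderFillRect(g_pRenderer, NULL);   // NULL = whole rendering target

SDL_RenderPresent(g_pRenderer);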

Changing the Z perspective in processing

I'm fairly new to Processing, but I am having trouble shifting the Z position of one of my line sets.
The X-axis lines look as I need them to, but I am basically trying to bring up the Y set of lines so they aren't just going downwards but are more linked up with the first set of lines. I hope I'm making sense; it's kind of hard to explain. Thanks!
Edit: Basically, what I'm trying to make is a tiled floor.
int grid = 80;

void setup() {
  size(1024, 900, P3D);
}

void draw() {
  int movement = mouseY - 500;
  background(0);
  strokeWeight(2.5);
  stroke(100, 255, 0, 60);
  //floorx
  for (int i = 0; i < width; i += grid) {
    line(i, height/2, 0, i, height, 5000);
  }
  //floory
  for (int i = 0; i < height; i += grid) {
    line(0, i + height/2, 0, width, i + height/2, 0);
  }
}
I think that you want the grid to appear more geometrically correct. To achieve this you need different distances between the horizontal lines.
Try this in your floory part:
grid = 40;
//floory
for (int i = 0; i < height; i += grid) {
  line(0, i + height/2, 0, width, i + height/2, 0);
  grid += 20;
}

accumulateWeighted throws a cv::Exception error

I am new to OpenCV and trying to find contours and draw rectangles around them. Here's my code, but it throws a cv::Exception when it gets to accumulateWeighted().
I tried converting both src (the original image) and dst (the background) to CV_32FC3 and then finding the average using accumulateWeighted.
#include "opencv2/video/tracking.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/imgproc/imgproc_c.h"
#include "opencv2/highgui/highgui.hpp"
#include <iostream>
#include <ctype.h>
using namespace cv;
using namespace std;
static void help()
{
cout << "\nThis is a Example to implement CAMSHIFT to detect multiple motion objects.\n";
}
Rect rect;
VideoCapture capture;
Mat currentFrame, currentFrame_grey, differenceImg, oldFrame_grey,background;
vector<vector<Point> > contours;
vector<Vec4i> hierarchy;
bool first = true;
int main(int argc, char* argv[])
{
//Create a new movie capture object.
capture.open(0);
if(!capture.isOpened())
{
//error in opening the video input
cerr << "Unable to open video file: " /*<< videoFilename*/ << endl;
exit(EXIT_FAILURE);
}
//capture current frame from webcam
capture >> currentFrame;
//Size of the image.
CvSize imgSize;
imgSize.width = currentFrame.size().width; //img.size().width
imgSize.height = currentFrame.size().height; ////img.size().height
//Images to use in the program.
currentFrame_grey.create( imgSize, IPL_DEPTH_8U);//image.create().
while(1)
{
capture >> currentFrame;//VideoCapture& VideoCapture::operator>>(Mat& image)
//Convert the image to grayscale.
cvtColor(currentFrame,currentFrame_grey,CV_RGB2GRAY);//cvtColor()
currentFrame.convertTo(currentFrame,CV_32FC3);
background = Mat::zeros(currentFrame.size(), CV_32FC3);
accumulateWeighted(currentFrame,background,1.0,NULL);
imshow("Background",background);
if(first) //Capturing Background for the first time
{
differenceImg = currentFrame_grey.clone();//img1 = img.clone()
oldFrame_grey = currentFrame_grey.clone();//img2 = img.clone()
convertScaleAbs(currentFrame_grey, oldFrame_grey, 1.0, 0.0);//convertscaleabs()
first = false;
continue;
}
//Minus the current frame from the moving average.
absdiff(oldFrame_grey,currentFrame_grey,differenceImg);//absDiff()
//bluring the differnece image
blur(differenceImg, differenceImg, imgSize);//blur()
//apply threshold to discard small unwanted movements
threshold(differenceImg, differenceImg, 25, 255, CV_THRESH_BINARY);//threshold()
//find contours
findContours(differenceImg,contours,hierarchy,CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE, Point(0, 0)); //findcontours()
//draw bounding box around each contour
//for(; contours! = 0; contours = contours->h_next)
for(int i = 0; i < contours.size(); i++)
{
rect = boundingRect(contours[i]); //extract bounding box for current contour
//drawing rectangle
rectangle(currentFrame, cvPoint(rect.x, rect.y), cvPoint(rect.x+rect.width, rect.y+rect.height), cvScalar(0, 0, 255, 0), 2, 8, 0);
}
//New Background
convertScaleAbs(currentFrame_grey, oldFrame_grey, 1.0, 0.0);
//display colour image with bounding box
imshow("Output Image", currentFrame);//imshow()
//display threshold image
imshow("Difference image", differenceImg);//imshow()
//clear memory and contours
//cvClearMemStorage( storage );
//contours = 0;
contours.clear();
//background = currentFrame;
//press Esc to exit
char c = cvWaitKey(33);
if( c == 27 ) break;
}
// Destroy All Windows.
destroyAllWindows();
return 0;
}
Please help me solve this.
You might want to RTFM before asking here.
The last argument of accumulateWeighted is an optional mask, and passing NULL there is most likely what throws: either omit it entirely or pass a real CV_8U mask the same size as the frame. Also use an alpha below 1.0 so the accumulator actually averages instead of just copying the frame (and create background once, before the loop, instead of zeroing it every frame, otherwise there is nothing to accumulate into):
accumulateWeighted(currentFrame, background, 0.5);
Also, I have no idea what the whole thing should achieve; adding up the current frame before diffing it does not make any sense to me.
If you planned to do background subtraction, throw it all away and use one of the built-in background subtractors instead.
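For example, here is a minimal sketch of that route, assuming an OpenCV 3+ build (in 2.4 the BackgroundSubtractorMOG2 class is instantiated directly rather than through the factory function); variable names are just for illustration:

#include "opencv2/opencv.hpp"
using namespace cv;

int main()
{
    VideoCapture capture(0);
    if (!capture.isOpened()) return 1;

    // Built-in background subtractor: maintains the background model for you.
    Ptr<BackgroundSubtractorMOG2> pMOG2 = createBackgroundSubtractorMOG2();

    Mat frame, fgMask;
    while (capture.read(frame))
    {
        // Updates the model and writes the foreground mask in one call.
        pMOG2->apply(frame, fgMask);
        imshow("Foreground mask", fgMask);
        if ((char)waitKey(33) == 27) break; // Esc to exit
    }
    return 0;
}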
