Processing: Performance issues with background()

I'm creating a stereoscopic test application where the scene is rendered into a PGraphics left and a PGraphics right with different camera angles for the two eye points. The two images are then combined into a side-by-side output in the draw() function.
The scene consists of a pre-rendered background, stored in a separate PGraphics, rendered once, and a rotating box() rendered for each frame.
The problem is that the call to gfx.background(gfxBackground); in render() is very CPU-intensive. If I replace it with gfx.background(0);, the sketch runs smoothly.
My assumption was that blitting data from one PGraphics to another would be hardware-accelerated, but it seems it isn't. Am I doing something wrong?
My sketch:
PGraphics leftBackground;
PGraphics rightBackground;
PGraphics left;
PGraphics right;
int sketchWidth() { return 1920; }
int sketchHeight() { return 1200; }
int sketchQuality() { return 8; }
String sketchRenderer() { return P3D; }
void setup()
{
  noCursor();
  leftBackground = createGraphics(width / 2, height, P3D);
  renderBackground(leftBackground, "L");
  rightBackground = createGraphics(width / 2, height, P3D);
  renderBackground(rightBackground, "R");
  left = createGraphics(width / 2, height, P3D);
  left.beginDraw();
  left.endDraw();
  left.camera(-10, 0, 220,
              0, 0, 0,
              0, 1, 0);
  right = createGraphics(width / 2, height, P3D);
  right.beginDraw();
  right.endDraw();
  right.camera( 10, 0, 220,
               0, 0, 0,
               0, 1, 0);
}
void draw()
{
  render(left, leftBackground);
  render(right, rightBackground);
  image(left, 0, 0);
  image(right, left.width, 0);
}
void renderBackground(PGraphics gfx, String str)
{
  gfx.beginDraw();
  gfx.background(0);
  gfx.stroke(255);
  gfx.noFill();
  gfx.rect(0, 0, gfx.width, gfx.height);
  gfx.scale(0.5, 1.0, 1.0);
  gfx.textSize(40);
  gfx.fill(255);
  gfx.text(str, 30, 40);
  gfx.endDraw();
}
void render(PGraphics gfx, PGraphics gfxBackground)
{
  gfx.beginDraw();
  gfx.background(gfxBackground);
  gfx.scale(0.5, 1, 1);
  gfx.rotateY((float)frameCount / 100);
  gfx.rotateX((float)frameCount / 90);
  gfx.stroke(255);
  gfx.fill(0);
  gfx.box(30);
  gfx.endDraw();
}

You've got multiple options to achieve the same visual output. Here are a few:
Simply overlay the "L"/"R" text, using gfx.background(0) in render(), in draw():
render(left, leftBackground);
render(right, rightBackground);
image(left, 0, 0);
image(right, left.width, 0);
text("L", 100, 100);
text("R", width / 2 + 100, 100);
PGraphics extends PImage, so instead of
gfx.background(gfxBackground);
you can use
gfx.image(gfxBackground, xoffset, yoffset);
You will need an offset because of the camera call. You will also need to translate the box along the Z axis, since by default it sits at (0, 0, 0) and will intersect the quad that renders the background image.
If you want to go deeper and find other bottlenecks, sample the CPU using jvisualvm (if you have the JDK installed and on your PATH, you should be able to run it from the terminal/command line; otherwise there's an executable in YOUR_JDK_INSTALL_PATH\bin).
Take a couple of snapshots at different intervals and compare them. You might find other draw commands that could be changed to gain a few ms per frame.

Related

How do I fill a vertex() shape with an image in Processing?

When I use my code it says: No uv texture coordinates supplied with vertex() call.
This is the code I use:
PImage img;
void setup() {
  size(720, 360, P3D);
}
void draw() {
  beginShape();
  img = loadImage("image.png");
  texture(img);
  vertex(50, 20);
  vertex(105, 20);
  vertex(105, 75);
  vertex(50, 75);
  endShape();
}
Like your error and George's comment say, to use a texture you need to pass in 4 parameters to the vertex() function instead of 2 parameters.
From the reference:
size(100, 100, P3D);
noStroke();
PImage img = loadImage("laDefense.jpg");
beginShape();
texture(img);
vertex(10, 20, 0, 0);
vertex(80, 5, 100, 0);
vertex(95, 90, 100, 100);
vertex(40, 95, 0, 100);
endShape();
(source: processing.org)
Also note that you should not be loading your image inside the draw() function, because that causes you to load the same image 60 times per second. You should load it once from the setup() function.
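The same load-once principle applies outside Processing as well: decode a resource a single time and reuse the decoded object on every frame. A minimal plain-Java sketch of that caching pattern (the class name ImageCache and its fields are this example's own, not Processing API):

```java
import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

public class ImageCache {
    private static BufferedImage cached;
    static File source;

    // Decode the image only on the first call, mirroring loadImage() in setup();
    // every later call (e.g. once per frame) returns the same object.
    static BufferedImage getImage() throws Exception {
        if (cached == null) {
            cached = ImageIO.read(source);
        }
        return cached;
    }

    public static void main(String[] args) throws Exception {
        // Write a tiny placeholder PNG so the example is self-contained.
        source = File.createTempFile("tex", ".png");
        ImageIO.write(new BufferedImage(4, 4, BufferedImage.TYPE_INT_ARGB),
                      "png", source);
        BufferedImage a = getImage(); // disk read happens here
        BufferedImage b = getImage(); // cached; no second read
        System.out.println(a == b);   // same object every "frame"
    }
}
```

Loading inside draw() is the equivalent of calling ImageIO.read() sixty times per second; the cache reduces that to one decode.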

Using the Processing 3.0 render pipeline with a custom projection matrix

My goal is to use the existing Processing PGraphicsOpenGL rendering pipeline, replacing only the camera or projection transformation matrix. The code below gives me strange results; I can't understand how Processing multiplies matrices internally, or even whether what I want is achievable this way.
The closest references I found are not OpenGL-compatible. Another solution would probably be to decompose this matrix to extract the camera parameters and set the camera object every frame, but I couldn't make that work, and my first attempts increased the line count considerably.
PMatrix3D p;
void setup() {
  size(600, 400, P3D);
  p = new PMatrix3D(
    5.400566, 0.519709, -4.3888016, 193.58757,
    5.284709, -9.016302, 3.312224, 266.927,
    0.012042404, 7.253584E-5, 0.0084899925, 1.0,
    0, 0, 0, 1);
  p.invert();
}
void draw() {
  float x = map(mouseX, 0, width, -200, 200);
  float z = map(mouseY, 0, height, -150, 150);
  ((PGraphicsOpenGL) g).camera.set(p);
  //?
  //((PGraphicsOpenGL) g).modelview.set(p);
  //((PGraphicsOpenGL) g).projection.set(p);
  background(20);
  lights();
  translate(width/2, height/2);
  translate(x, 0, z);
  box(100);
}
EDIT
Below is a version where I multiply each vertex directly with the matrix. The results are quite different from the previous code.
Processing has its own native rendering pipeline including a camera, lighting, shaders, etc. I didn't want to build a new shader just because I can't set the camera matrix, but that's what I'm doing now in my project, and it halves my framerate. Terrible.
PMatrix3D p;
void setup() {
  size(600, 400, P3D);
  p = new PMatrix3D(
    5.400566, 0.519709, -4.3888016, 193.58757,
    5.284709, -9.016302, 3.312224, 266.927,
    0.012042404, 7.253584E-5, 0.0084899925, 1.0,
    0, 0, 0, 1);
  //p.invert();
}
void draw() {
  float x = map(mouseX, 0, width, -500, 500);
  float z = map(mouseY, 0, height, -500, 500);
  //((PGraphicsOpenGL) g).camera.set(p);
  //((PGraphicsOpenGL) g).modelview.set(p);
  //((PGraphicsOpenGL) g).projection.set(p);
  background(50);
  lights();
  //translate(width/2, height/2);
  translate(x, 0, z);
  stroke(0);
  strokeWeight(1);
  //box(100);
  myBox(true);
}
void myBox(boolean multiplied) {
  float side = 10;
  float h = side/2;
  /* cube vertices
  -----5------
  --1-------6-
  --------2---
  -----7------
  --3-------8-
  --------4---
  */
  PVector[] vtx = {
    new PVector(-h, -h, -h),
    new PVector(+h, -h, -h),
    new PVector(-h, +h, -h),
    new PVector(+h, +h, -h),
    new PVector(-h, -h, +h),
    new PVector(+h, -h, +h),
    new PVector(-h, +h, +h),
    new PVector(+h, +h, +h),
  };
  strokeWeight(5);
  for (PVector v : vtx) {
    if (multiplied) {
      PVector result = new PVector();
      PMatrix3D mat = p;
      mat.mult(v, result);
      stroke(#ff0000);
      point(result.x/result.z, result.y/result.z);
    } else {
      point(v.x, v.y, v.z);
    }
  }
}
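One pitfall worth noting in the sketch above: PMatrix3D.mult(PVector, PVector) has no fourth component to write to, so the code divides by result.z rather than by a true homogeneous w. That only matches a real perspective divide when the matrix's third row happens to produce the w term. A minimal plain-Java illustration of the full homogeneous transform (the helper name mulPoint and the row-major layout are this example's own conventions, not Processing API):

```java
public class Homogeneous {
    // Multiply a row-major 4x4 matrix by the point (x, y, z, 1)
    // and return the full homogeneous result (x', y', z', w').
    static double[] mulPoint(double[] m, double x, double y, double z) {
        return new double[] {
            m[0]*x  + m[1]*y  + m[2]*z  + m[3],
            m[4]*x  + m[5]*y  + m[6]*z  + m[7],
            m[8]*x  + m[9]*y  + m[10]*z + m[11],
            m[12]*x + m[13]*y + m[14]*z + m[15]
        };
    }

    public static void main(String[] args) {
        // A toy projection whose bottom row copies z into w',
        // so the perspective divide scales by 1/z.
        double[] proj = {
            2, 0, 0, 0,
            0, 2, 0, 0,
            0, 0, 1, 0,
            0, 0, 1, 0
        };
        double[] v = mulPoint(proj, 3, 4, 2);
        double sx = v[0] / v[3];  // divide by w', not by z'
        double sy = v[1] / v[3];
        System.out.println(sx + " " + sy);  // 3.0 4.0
    }
}
```

With this toy matrix w' equals z', so dividing by z happens to give the same answer; with a general projection matrix (like the one in the sketch) the two divides differ, which is one reason the two code versions disagree.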
I did a bit of research and found this about OpenGL:
"The renderer class, PGraphicsOpenGL, exposes a function that can be used for that purpose, called textureSampling(int mode), where mode can take the values 2 (nearest), 3 (linear), 4 (bilinear) and 5 (trilinear)."
Research taken from https://github.com/processing/processing/wiki/Advanced-OpenGL, edited by technosalon.
Because those are the only options, try changing your mode value to a higher number to see if that helps.

Can't set noFill() when using PGraphics

I can't set noFill() while rendering to a PGraphics object.
Trying to draw an arc gives me a filled arc, while what I want is just the stroked outline.
I used the following code in the Processing application on 64-bit Windows 7:
PGraphics pg;
void setup() {
  size(123, 123);
  pg = createGraphics(123, 123);
  pg.strokeWeight(5);
  pg.stroke(255);
  pg.noFill();
  noFill();
}
void draw() {
  pg.beginDraw();
  pg.background(0);
  pg.translate(width/2, height/2);
  pg.arc(0, 0, 100, 100, 0, PI+1);
  pg.endDraw();
  image(pg, 0, 0);
}
It is better to set modes and styles for the PGraphics inside the beginDraw()/endDraw() block; then it works as you want:
pg.beginDraw();
pg.background(0);
pg.strokeWeight(5);
pg.stroke(255);
pg.noFill();
pg.translate(width/2, height/2);
pg.arc(0, 0, 100, 100, 0, PI+1);
pg.endDraw();

How to force 24-bit color depth in OpenGL ES

I am trying to load and display a texture in OpenGL ES. The problem is that even though my image is in ARGB_8888 format, the texture seems to be drawn in RGB_565 format. Without dithering, my image looks pretty terrible.
I am running my program on a phone that supports 16M colors, so the texture should be viewable in all its original glory.
EDIT code:
loading bitmap:
background = BitmapFactory.decodeResource(getResources(), R.drawable.background, null);
generating texture:
public void loadBackground(GL10 gl) {
  gl.glGenTextures(1, textures, 0);
  gl.glBindTexture(GL10.GL_TEXTURE_2D, textures[0]);
  gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_LINEAR);
  gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_LINEAR);
  GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, background, 0);
  background.recycle();
}
drawing:
gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
gl.glBindTexture(GL10.GL_TEXTURE_2D, textures[0]);
gl.glVertexPointer(3, GL10.GL_FLOAT, 0, backgroundVertexBuffer);
gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, textureBuffer);
gl.glDrawArrays(GL10.GL_TRIANGLE_STRIP, 0,4);
gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
gl.glDisableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
onSurfaceCreated:
public void onSurfaceCreated(GL10 gl, EGLConfig config) {
  gl.glEnable(GL10.GL_TEXTURE_2D);
}
onSurfaceChanged
public void onSurfaceChanged(GL10 gl, int width, int height) {
  gl.glViewport(0, 0, width, height);
  gl.glLoadIdentity();
  gl.glOrthof(0, width, height, 0, -1, 1);
}
By default, GLSurfaceView uses RGB_565 as its pixel format, so you need to request a 32-bit surface before you bind the renderer. More info at http://developer.android.com/reference/android/opengl/GLSurfaceView.html; look at one of the setEGLConfigChooser methods.
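A minimal sketch of that setup, assuming a typical Activity that owns the GLSurfaceView (the variable names view, context, and renderer are illustrative, not from the question's code):

```java
// Request an RGBA_8888 EGL config BEFORE setRenderer(); otherwise the
// default RGB_565 config is chosen and textures get quantized to 16-bit.
GLSurfaceView view = new GLSurfaceView(context);
view.setEGLConfigChooser(8, 8, 8, 8, 16, 0); // R, G, B, A, depth, stencil bits
view.setRenderer(renderer);
// Also make sure the window surface itself is 32-bit:
view.getHolder().setFormat(PixelFormat.RGBA_8888);
```

Both calls matter: setEGLConfigChooser picks the EGL framebuffer config, while the holder's pixel format controls the surface the compositor hands to the view.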

Processing: How to split screen?

I'm trying to create a multi-player game with Processing, but I can't figure out how to split the screen in two to display each player's view.
In C#, for example, we have
Viewport leftViewport, rightViewport;
to solve the problem.
Thanks a lot
In Processing, all drawing operations like rect(), ellipse(), etc. are done on a PGraphics element. You can create two new PGraphics objects with the renderer of your choice, draw on them, and add them to your main view:
PGraphics leftViewport, rightViewport;
int w = 500;
int h = 300;
void setup() {
  size(500, 300, P3D);
  leftViewport = createGraphics(w/2, h, P3D);
  rightViewport = createGraphics(w/2, h, P3D);
}
void draw() {
  // draw something fancy on each viewport
  leftViewport.beginDraw();
  leftViewport.background(102);
  leftViewport.stroke(255);
  leftViewport.line(40, 40, mouseX, mouseY);
  leftViewport.endDraw();
  rightViewport.beginDraw();
  rightViewport.background(102);
  rightViewport.stroke(255);
  rightViewport.line(40, 40, mouseX, mouseY);
  rightViewport.endDraw();
  // add the two viewports to the main panel
  image(leftViewport, 0, 0);
  image(rightViewport, w/2, 0);
}