How to move all pixels down? - processing

I drew multiple white pixels on a black canvas to get a night sky. I gave the stars random positions, and now I want all the pixels to move down in order to imitate the movement of the earth.
I tried translate(), but that doesn't seem to work with pixels.
Is there a way to move all the pixels in the canvas down?

arrayCopy(pixels, 0, pixels, width, (height - 1) * width);
This shifts every row of pixels down by one, which should solve the problem you have (remember to call loadPixels() before and updatePixels() after touching the pixels array). For more help about arrayCopy() look here: https://processing.org/reference/arrayCopy_.html
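In context, the call might sit in the draw loop like this (the star setup and the top-row clear are my own additions to make the sketch self-contained, not part of the fix itself):

int numStars = 100;

void setup() {
  size(400, 400);
  background(0);
  stroke(255);
  // Scatter the stars once; the per-frame pixel shift does the rest.
  for (int i = 0; i < numStars; i++) {
    point(random(width), random(height));
  }
}

void draw() {
  loadPixels();
  // Shift every row of pixels down by one: row y is copied to row y + 1.
  arrayCopy(pixels, 0, pixels, width, (height - 1) * width);
  // Clear the top row so its old values don't smear downwards.
  for (int x = 0; x < width; x++) {
    pixels[x] = color(0);
  }
  updatePixels();
}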

Basically, the process of creating an animation is this:
Store your state in variables, or in a buffer.
Use those variables to draw your scene every frame.
Change those variables over time to change your scene.
One approach is to draw your stars to a buffer. The createGraphics() function is your friend here. Draw that buffer to the screen using the image() function, then move the buffer's y position down by some amount each frame.
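A minimal sketch of the buffer approach (the wrap-around scrolling via a second image() call is an assumption on my part):

PGraphics stars;
float y = 0;

void setup() {
  size(400, 400);
  // Draw the star field once into an off-screen buffer.
  stars = createGraphics(width, height);
  stars.beginDraw();
  stars.background(0);
  stars.stroke(255);
  for (int i = 0; i < 100; i++) {
    stars.point(random(width), random(height));
  }
  stars.endDraw();
}

void draw() {
  background(0);
  // Draw the buffer twice so it tiles seamlessly as it scrolls down.
  image(stars, 0, y);
  image(stars, 0, y - height);
  y = (y + 1) % height;
}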
Another approach is to store your star positions in a set of variables, such as an ArrayList of PVector instances. Draw those positions to the screen, and move each one down a bit each frame.
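And a minimal sketch of the PVector approach (wrapping stars back to the top when they leave the canvas is my own assumption):

ArrayList<PVector> stars = new ArrayList<PVector>();

void setup() {
  size(400, 400);
  for (int i = 0; i < 100; i++) {
    stars.add(new PVector(random(width), random(height)));
  }
}

void draw() {
  background(0);
  stroke(255);
  for (PVector star : stars) {
    point(star.x, star.y);
    star.y += 1;            // move each star down a bit each frame
    if (star.y > height) {  // wrap back to the top
      star.y = 0;
    }
  }
}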
The translate() function should work fine for points, and it's just another approach to the steps I outlined above. As is Tobias's answer. There are a bunch of different ways to do this. If you're still having trouble, please post an MCVE in a new question post. Good luck.
Shameless self-promotion: I wrote a tutorial on creating animations in Processing available here.

Related

Need line drawing algorithm for simulating natural pencil

I'm writing a drawing program that uses a pressure-sensitive tablet for input. I'd like to be able to simulate the soft pencil effect that many other art programs have (such as Paint Tool SAI, Art Rage). The technique I'm using at the moment is functional, but is missing the cleanness I see in more professional programs.
My algorithm at the moment works like this:
Create a bitmap representing the head of the brush. This is just a transparent bitmap with a black circle drawn on it. The circle has an inner radius that is solid black and an outer radius. The blackness linearly fades from opaque to transparent as you move from the inner to the outer radius.
Capture input events from my tablet. Each point contains an (x, y) coordinate as well as a pressure value.
For every point after the first one, draw a line from the previous point to the current one. This is done by drawing (daubing) the brush bitmap several times between the two points. The step size between each daub is chosen so there is an overlap between subsequent daubs (see the sketch after these steps).
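In code, that daubing step looks roughly like this (simplified, in Processing-style Java; brushRadius and stampBrush() stand in for my real brush-bitmap code):

float brushRadius = 8;

// Stamp the brush bitmap at fixed intervals between two input points.
void daubLine(PVector from, PVector to, float pressure) {
  float dist = PVector.dist(from, to);
  float spacing = brushRadius * 0.25;  // chosen so subsequent daubs overlap
  int steps = max(1, ceil(dist / spacing));
  for (int i = 0; i <= steps; i++) {
    float t = i / (float) steps;
    // Linearly interpolate a daub position between the two points.
    float x = lerp(from.x, to.x, t);
    float y = lerp(from.y, to.y, t);
    stampBrush(x, y, pressure);  // draws the soft-circle bitmap, scaled by pressure
  }
}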
This works reasonably well, but the result is a line that is somewhat blobby and jagged.
One thing I need to do is somehow smooth out the input points so that the stroke as a whole is smooth.
The other thing I need to do is figure out how to 'drag' the brush head along this path to make the stroke. If the spacing is too far apart, the stroke looks like a line of circles. If too close together, the stroke builds up on itself and becomes very dark. (I tried to fix this by attenuating the brush by the spacing. This does make things more consistent, but stops the stroke from being fully opaque).
Anyhow, I'd expect that there's a lot of research already done on this, if only I knew where to look. Please let me know if there are any better pencil drawing algorithms out there.
Instead of drawing each new daub over what has already been drawn using the standard blending functions (so that regions of overlap accumulate a higher opacity), you need to keep track of the maximum opacity reached so far at each pixel.
Only after you have built up the complete stroke (as if on a separate white sheet) do you blend it onto the existing line art.
The picture illustrates the difference between blending and keeping the maximum opacity.
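A minimal sketch of the idea, in Processing-style Java (the buffer layout is illustrative; strokeAlpha holds the stroke's coverage as if on its own separate sheet):

// Keep the maximum coverage each daub contributes at every pixel,
// instead of compositing daubs with normal alpha blending.
void stampMax(float[] strokeAlpha, int w, int h,
              float[] brushAlpha, int brushSize, int cx, int cy) {
  for (int by = 0; by < brushSize; by++) {
    for (int bx = 0; bx < brushSize; bx++) {
      int x = cx + bx - brushSize / 2;
      int y = cy + by - brushSize / 2;
      if (x < 0 || y < 0 || x >= w || y >= h) continue;
      int i = y * w + x;
      // max() instead of adding: overlapping daubs don't darken the stroke.
      strokeAlpha[i] = max(strokeAlpha[i], brushAlpha[by * brushSize + bx]);
    }
  }
}
// Once the stroke is complete, blend strokeAlpha onto the canvas in one pass.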

three.js - Overlapping layers flickering

When several objects overlap on the same plane, they start to flicker. How do I tell the renderer to put one of the objects in front?
I tried to use .renderDepth, but it only works partly -
see example here: http://liveweave.com/ahTdFQ
Both boxes have the same size and it works as intended. I can change which of the boxes is visible by setting .renderDepth. But if one of the boxes is a bit smaller (say 40,50,50) the contacting layers are flickering and the render depth doesn't work anymore.
How to fix that issue?
When .renderDepth doesn't work, you have to set the depths yourself.
Moving whole meshes around is indeed not really efficient.
What you are looking for are offsets bound to materials:
material.polygonOffset = true;
material.polygonOffsetFactor = -0.1;
should solve your issue. See update here: http://liveweave.com/syC0L4
Use negative factors to pull a material forward (display it on top) and positive factors to push it back (hide it).
For starters, try reducing the far range on your camera - try 1000. Generally speaking, you shouldn't have overlapping faces in your 3D scene, unless they are treated in a VERY specific way (look up the term 'decal textures'/'decals'). So basically, you have to create depth offsets, and perhaps even pre-sort the objects when doing this, which all requires pretty low-level tinkering.
If the far-range reduction helps, then you're experiencing a lack of depth precision (which varies by device). Also look up 'z-fighting'.
UPDATE
Don't overlap planes.
How do I tell the renderer to put one of the objects in front?
You put one object in front of the other :)
For example, if you have a camera at (0,0,0) looking at an object at (0,0,10), and you want another object to be behind the first one, put it at (0,0,11). It should work.
UPDATE2
What is z-buffering:
http://en.wikipedia.org/wiki/Z-buffering
http://msdn.microsoft.com/en-us/library/bb976071.aspx
Take note of "floating point in range of 0.0 - 1.0".
What is z-fighting:
http://en.wikipedia.org/wiki/Z-fighting
"...have similar values in the z-buffer. It is particularly prevalent with coplanar polygons, where two faces occupy essentially the same space, with neither in front. Affected pixels are rendered with fragments from one polygon or the other arbitrarily, in a manner determined by the precision of the z-buffer."
"The renderer cannot reposition anything."
I think that this is completely untrue. The renderer can reposition everything, and probably does unless it's ShaderToy or some video filter or the like. Every time you move your camera, the renderer repositions everything (the camera is actually the only thing that DOES NOT MOVE).
It seems that you are missing some crucial concepts here; I'd start with this:
http://www.opengl-tutorial.org/beginners-tutorials/tutorial-3-matrices/
About the depth offset mentioned:
Here's how this would work. Say you want to draw a decal on a surface: you 'draw' another mesh on this surface, for example by projecting a quad onto it. You want to draw a bullet hole over a concrete wall, so you end up with two coplanar surfaces - the wall and the bullet hole. You can figure out the depth-buffer precision, find the smallest representable step, and then move the bullet-hole mesh by that amount towards the camera. The object does not get scaled (you're doing this in NDC, which you can visualize as a cube, moving planes back and forth in the smallest possible increment), but it does translate along the depth direction, ending up in front of the other surface.
I don't see any flicker. The cube movement in 3D seems super-smooth. Can you try on a different computer (maybe a faster one)? I used Chrome on a MacBook Pro.

WebGL- After rendering the mesh correctly some triangles disappear

My problem is the following. I have a canvas in which I am drawing a piece using WebGL.
When it renders, it is fine.
But then, two seconds later or so, without moving the camera or anything, some of the triangles disappear.
And after moving the camera around, the same triangles stay gone (though I have read that in some cases this is due to the depth buffer and the distance to the object, so zooming in or out can change which triangles disappear).
What could be the problem?
I am applying both color and texture to each element in order to draw black lines around each "square" (my texture is a square with a black border and white inside). That means the final color is computed in the fragment shader by multiplying the color by the texture. It also means that some of the nodes are duplicated, or more (to give a node a different TextureVertex attribute for each element it belongs to, I need a separate copy of that node). It is important to notice that when I create a mesh with fewer nodes, the triangles do not disappear. Anyway, I have seen very complex WebGL examples on the net, and I have maybe just 1000 nodes, so I don't think it is a problem with my graphics hardware.
What do you think could be the problem? How would you solve it? If you need more info, just let me know. I didn't include code because it renders OK at the beginning, and I only have this problem with "big" meshes.
Thanks for the comment. Please find both images here:
First draw
A few seconds later.
EDITED: I'm going to give some more details in case this helps to find the problem. Here is the information for one of the squares (the rest of the squares follow the same scheme). Notice that they are defined in the code-behind as public variables and then passed to the HTML script:
Nodes for vertex buffer:
serverSideInitialCoordinates = {
-1.0,-1.0,0.0,
1.0,-1.0,0.0,
1.0,1.0,0.0,
-1.0,1.0,0.0,
0.0,-1.0,0.0,
1.0,0.0,0.0,
0.0,1.0,0.0,
-1.0,0.0,0.0,
0.0,0.0,0.0,
};
Connectivity to form triangles:
serverSideConnectivity = {
0,4,8,
0,8,7,
1,5,8,
1,8,4,
2,6,8,
2,8,5,
3,7,8,
3,8,6
};
Colors: not relevant.
TextureVertex: {
0.0,0.0,
1.0,0.0,
1.0,1.0,
0.0,1.0,
0.5,0.0,
1.0,0.5,
0.5,1.0,
0.0,0.5,
0.5,0.5
};
As I mentioned I have an image which is white with just few pixels black around the borders. So in the fragment shader I have something similar to this:
gl_FragColor = texture2D(u_texture, v_texcoord) * vColor;
Then I have a function that loads the image and gets the texture.
In the InitBuffers function I create the buffers and assign to them the vertex positions, the colors, and the connectivity of the triangles.
Finally, in the Draw function I bind the buffers again - vertex positions, colors (bound as the color attribute), texture coordinates (bound as the TextureVertex attribute), and connectivity - then set the matrix uniform and draw. I don't think the problem is here, because it works fine for smaller meshes, but I still don't know why it doesn't for larger ones. I thought maybe Firefox's performance is worse than other browsers', but then I ran difficult WebGL models I found on the web in Firefox and they work fine, with no triangles missing. If I draw the same objects without the texture (just colors), it works fine and no triangles are missing. Do you think it might take too much effort for the shader to compute the color every time by multiplying both things? Can you think of another way?
My idea was to just draw black lines between some nodes instead of using a complete texture, but I can't get it working: either I draw the triangles or I draw the lines, but it doesn't let me draw both at the same time. If I put code for both, only the last "elements" are drawn.

User interaction in Processing

I have a general question (I know I should present specific code with a problem, but in my case the problem is of a more general nature).
In Processing, let's say I make an ellipse:
ellipse(30, 30, 10, 10);
Now, is there a way to get the pixels where this ellipse is on the canvas? The reason would be to have a way of creating user interaction with the mouse (for instance). So when someone clicks the mouse over the ellipse, something happens.
I thought of turning everything into objects and use a constructor to somehow store the position of the shape, but this is easier said than done, particularly for more complex shapes. And that is what I am interested in. It's one thing to calculate the position of an ellipse, but what about more complex shapes? Are there any libraries?
Check out the geomerative library. It has a way to check whether the mouse is inside any SVG shape. I can't remember the exact API off the top of my head, but it works something like this: you make a shape:
myShape = RG.loadShape("shape.svg");
and a point:
RPoint p = new RPoint(mouseX, mouseY);
and the boolean function contains() will tell you if the point is inside the shape:
myShape.contains(p);
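Pieced together, a minimal sketch might look like this (again from memory, so treat the exact calls as approximate and check the geomerative examples):

import geomerative.*;

RShape myShape;

void setup() {
  size(400, 400);
  // geomerative must be initialised with the sketch before loading shapes.
  RG.init(this);
  myShape = RG.loadShape("shape.svg");
}

void draw() {
  background(255);
  RG.shape(myShape);
}

void mousePressed() {
  RPoint p = new RPoint(mouseX, mouseY);
  if (myShape.contains(p)) {
    println("clicked inside the shape");
  }
}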
It's better to use a mathematical formula than pixel-by-pixel checking of the mouse position (it's much faster, and involves less code).
For a perfect circle, you can calculate the Euclidean distance using Pythagoras' theorem. Assume your circle is centred at position (circleX,circleY), and has a radius (not diameter) of circleR. You can check if the mouse is over the circle like this:
if(sq(mouseX-circleX)+sq(mouseY-circleY) <= sq(circleR)) {
// mouse is over circle
} else {
// mouse is not over circle
}
This approach basically imagines a right-angled triangle, where the hypotenuse (the longest side) runs from the centre of the circle to the mouse position. It uses Pythagoras' theorem to calculate the length of that hypotenuse, and if it's less than the circle's radius then the mouse is inside the circle. (It includes a slight optimisation though -- it's comparing squares to avoid doing a square root, as that can be comparatively slow.)
An alternative to my original mathematical answer also occurred to me. If you can afford the memory and processing power of drawing all your UI elements twice then you can get good results by using a secondary buffer.
The principle involves having an off-screen graphics buffer (e.g. using PGraphics). It must be exactly the same size as the main display, and have anti-aliasing disabled. Draw all your interactive UI elements (buttons etc.) to this buffer. However, instead of drawing them the normal way, give each one a unique colour which it uses for fill and stroke (don't add any text or images... just solid colours). For example, one button might be entirely red, and another entirely green. Any other RGB value works, as long as each item has a unique colour. Make sure the background has a unique colour too.
The user never sees that buffer, so don't draw it to the screen (unless you're debugging or something). When you want to detect what item the mouse is over, just lookup the mouse position on that off-screen buffer. Get the pixel colour at that location, and match it to the UI element.
After you've done all that, just go ahead and draw everything to the main display as normal.
It's worth noting that you can cut down the processing time of this approach a lot if your UI elements never (or rarely) move. You only need to redraw the secondary buffer when something appears/disappears, animates, or changes size/position.
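A minimal sketch of the idea (the two buttons and their colours are purely illustrative):

PGraphics picker;

void setup() {
  size(400, 400);
  picker = createGraphics(width, height);
  picker.noSmooth();  // anti-aliasing would blend colours at the edges
  picker.beginDraw();
  picker.background(0);             // unique background colour
  picker.noStroke();
  picker.fill(255, 0, 0);           // button 1: pure red
  picker.ellipse(100, 100, 60, 60);
  picker.fill(0, 255, 0);           // button 2: pure green
  picker.rect(200, 200, 80, 40);
  picker.endDraw();
}

void draw() {
  // Draw the real, visible UI here as normal.
  background(50);
}

void mousePressed() {
  // Look up the mouse position in the hidden buffer instead of the screen.
  color c = picker.get(mouseX, mouseY);
  if (c == color(255, 0, 0)) {
    println("clicked button 1");
  } else if (c == color(0, 255, 0)) {
    println("clicked button 2");
  }
}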

Is there a fast way to grab an area of an OpenGL-ES scene and render that area as a picture-in-picture?

I have a bunch of game elements being drawn to the screen with OpenGL-ES and I'd like to be able to render a small rectangle in the bottom corner of the screen that shows, say, what's presently being displayed in the top left quarter of the screen.
In that way it's similar to a picture-in-picture from a tv, only the smaller picture would be showing part of the same thing the bigger picture is showing.
I'm comfortable with scaling in OpenGL-ES, but what I don't know how to do is get the proper rectangle of renderbuffer data and use that chunk as the data for an inset frame buffer for the next render pass. I imagine there's some trick along these lines to do this efficiently.
I've tried re-rendering the game elements at a smaller scale for this inset window and it just seems horribly inefficient when the data is already elsewhere and just needs to be scaled down a bit.
I'm not sure I'm asking this clearly or in the right terms, so any and all illumination is welcome and appreciated - especially examples. Thank you!
Have a look at glCopyTexImage2D. It lets you copy a portion of the framebuffer into a texture. So the order of operations would be (sketched in code after these steps):
Draw your scene normally
Bind your picture-in-picture texture
glCopyTexImage2D
Draw a quad with that texture in the bottom corner
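If you're on Android, for example, the middle steps might look like this in Java (a sketch only: screenWidth, screenHeight, and pipTexture are placeholders for your own values, and pipTexture must already be created with compatible filter parameters):

// import android.opengl.GLES20;

// 1. Draw your scene normally (not shown).

// 2. Bind the picture-in-picture texture.
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, pipTexture);

// 3. Copy the top-left quarter of the framebuffer into it.
//    GL window coordinates have their origin at the bottom-left,
//    so the top-left quarter starts at y = screenHeight - pipH.
int pipW = screenWidth / 2;
int pipH = screenHeight / 2;
GLES20.glCopyTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGB,
    0, screenHeight - pipH, pipW, pipH, 0);

// 4. Draw a small quad in the bottom corner sampling pipTexture (not shown).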
