Choosing a renderer in Processing - what's the difference between default and J2D?

The Processing size() documentation says:
In addition to the default renderer, other renderers are: P2D, P3D, PDF
So what's the difference between Default and J2D?
There used to be JAVA2D, P2D, P3D, and OPENGL before v2.0, and I believe that P3D is now just OPENGL.
This link has some relevant info: Life Saving Tips For Processing

There are 4 render modes in Processing 2.0:
default ("slow" but very accurate 2D render mode)
P2D (OPENGL, faster but less accurate 2D render mode)
P3D (OPENGL and well, 3D)
PDF (for PDF output)
[Images: the ellipse drawn with the default, P2D, and P3D renderers]
Code I used to create these images:
void setup() {
  // Uncomment one size() call at a time to compare renderers:
  //size(200, 200);       // default renderer
  //size(200, 200, P2D);  // OpenGL-backed 2D
  size(200, 200, P3D);    // OpenGL 3D
}

void draw() {
  background(153);
  strokeWeight(10);
  ellipse(100, 100, 100, 100);
}
You can find a more detailed explanation including a guide on choosing the right mode at What is P3D?

Related

Codepen prevents p5.dom webcam capture

I developed some working code on the p5.js Web editor and wanted to port it to Codepen for further work. Even the following example from the processing website fails to work:
let capture;

function setup() {
  createCanvas(390, 240);
  capture = createCapture(VIDEO);
  capture.size(320, 240);
  //capture.hide();
}

function draw() {
  background(255);
  image(capture, 0, 0, 320, 240);
  filter(INVERT);
}
When I try to run the code, I allow access to my webcam, which turns on for about two seconds before shutting down. No image is captured, and there is no error message in the console. I have found some pens that do have working webcams (and others that seem to have the same issue), but none that I saw tried to do it through the p5.dom library.
Is this a known issue? Are there any workarounds?
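A minimal diagnostic sketch (not from the original thread), assuming the failure is a permission rejection inside Codepen's preview iframe: it bypasses p5's createCapture() and asks the standard MediaDevices API directly, logging the rejection reason that otherwise never reaches the console.
navigator.mediaDevices.getUserMedia({ video: true })
  .then(function (stream) {
    // Camera granted: log the device, then release it again.
    console.log('camera OK:', stream.getVideoTracks()[0].label);
    stream.getTracks().forEach(function (t) { t.stop(); });
  })
  .catch(function (err) {
    // Camera refused: NotAllowedError here usually means the embedding
    // iframe, not your sketch, is blocking access.
    console.log('camera blocked:', err.name, err.message);
  });
If this logs "camera blocked" on Codepen but "camera OK" in the p5.js Web Editor, the embed is denying camera access rather than the sketch being at fault.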

React Native: Creating an Animated View that slides directly with finger drag

Desired Effect:
I want to create a basic animation of a View sliding smoothly (left or right) in sync with the drag pace of my finger (e.g. sliding an off-canvas menu into and out of view).
Currently I'm using Animated.spring in a parent component that handles all gestures. Then I'm using a transform with translateX in a child component to slide a View left or right.
For Example:
Root.js (parent component that handles gestures):
_handlePanResponderMove(e: Object, gestureState: Object) {
  let newValue = gestureState.moveX - this.state.swipeStart;
  Animated.spring(
    this.state.slideValue,
    {
      toValue: newValue,
      velocity: gestureState.vx,
      tension: 2,
      friction: 8,
    }).start();
}
Navigation.js (child component that slides):
render() {
  return (
    <Animated.View style={[styles.navigation, {transform: [{translateX: this.props.slideValue}]}]}>
      <View />
    </Animated.View>
  );
}
The Problem:
The animation is sticky/laggy instead of moving smoothly at the pace of the finger's sliding gesture.
My reasoning so far:
From my limited animation experience, Animated.spring, Animated.ease, and Animated.timing don't really describe the equally paced sliding movement I'm after, but I suppose I need to use one of them to get optimized native performance (otherwise I'd just call setState(slideValue) and do some math with the current location of my finger to figure out the position of the View).
Question:
What would be the preferred way to describe this type of smooth sliding animation using the optimized React-Native Animated library?
What I've tried out:
1) Using Animated.timing with a low duration and linear easing (my best guess at what I should do):
Animated.timing(
  this.state.navSlideValue,
  {
    toValue: newValue,
    duration: 10,
    easing: Easing.linear
  }).start();
2) Raising the tension on Animated.spring:
Animated.spring(
  this.state.navSlideValue,
  {
    toValue: newValue,
    velocity: gestureState.vx,
    tension: 300,
    friction: 30,
  }).start();
The preset functions timing and spring are useful for running animations to a specified value with a given easing. If you want to tie an Animated value directly to a gesture event, you can use Animated.event instead:
PanResponder.create({
  // On each move event, set slideValue to gestureState.dx -- the
  // first value `null` ignores the first `event` argument.
  onPanResponderMove: Animated.event([
    null, {dx: this.state.slideValue}
  ]),
  // ...rest of your panResponder handlers
});
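For context, a minimal sketch of the full wiring, assuming the React Native API of this era (no useNativeDriver flag); the component name and release behavior are illustrative, not part of the Animated API:
import React from 'react';
import { Animated, PanResponder } from 'react-native';

class SlidingMenu extends React.Component {
  constructor(props) {
    super(props);
    // slideValue drives translateX; Animated.event writes gestureState.dx
    // into it on every move, so the view tracks the finger 1:1 with no
    // spring or timing lag.
    this.slideValue = new Animated.Value(0);
    this.panResponder = PanResponder.create({
      onMoveShouldSetPanResponder: () => true,
      onPanResponderMove: Animated.event([null, { dx: this.slideValue }]),
      // When the finger lifts, spring back to the resting position.
      onPanResponderRelease: () => {
        Animated.spring(this.slideValue, { toValue: 0 }).start();
      },
    });
  }

  render() {
    return (
      <Animated.View
        {...this.panResponder.panHandlers}
        style={{ transform: [{ translateX: this.slideValue }] }}
      />
    );
  }
}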

Access sprites in Canvas?

Targeting HTML5 in OpenFL is easy; however, I could not add glow effects to sprites. I am thinking of a workaround: using a JavaScript library to add WebGL effects to the sprites in the canvas.
But the question is, how would I access the sprites in the canvas using JavaScript, and what tool can I use to inspect sprites in the canvas?
First off, since version 4.0 OpenFL uses the WebGL renderer by default, and adding a glow effect in that case requires a deep dive into the openfl/lime source code, which I can't give you.
But if it's suitable for you, force OpenFL to use the canvas fallback with
<haxedef name="canvas"/>
and then you can create a custom Bitmap class (for example) with a glow effect, like so:
import openfl.display.Bitmap;
import openfl.display.BitmapData;
import openfl.display.PixelSnapping;
import openfl._internal.renderer.cairo.CairoBitmap;
import openfl._internal.renderer.canvas.CanvasBitmap;
import openfl._internal.renderer.dom.DOMBitmap;
import openfl._internal.renderer.opengl.GLBitmap;
import openfl._internal.renderer.RenderSession;

class HackyBitmap extends Bitmap
{
    /**
     * Custom property for tweening
     */
    public var glowBlur:Float = 0.0;

    public function new(bitmapData:BitmapData = null, pixelSnapping:PixelSnapping = null, smoothing:Bool = false)
    {
        super(bitmapData, pixelSnapping, smoothing);
    }

    public override function __renderCanvas(renderSession:RenderSession):Void
    {
        var context = renderSession.context;
        // Enable a white shadow before rendering; on canvas this reads as a glow.
        if (glowBlur > 0)
        {
            context.save();
            context.shadowBlur = glowBlur;
            context.shadowColor = "white";
        }
        CanvasBitmap.render(this, renderSession);
        // Restore the context so other display objects are unaffected.
        if (glowBlur > 0)
        {
            context.restore();
        }
    }
}
Usage
var data = Assets.getBitmapData("img/berry.png");
var hacky = new HackyBitmap(data);
hacky.x = hacky.y = 20;
addChild(hacky);
//to animate glow property
Actuate.tween(hacky, 1.0, { glowBlur: 50 }).repeat().reflect();
OpenFL/Lime versions
lime 3.2.0
openfl 4.2.0
How it looks: [screenshot of the animated glow not included]
As said in my comment, sprites don't exist in JS/HTML; they are an abstraction on top of the canvas that ends up calling .drawImage, so you won't find them in any browser developer tools. The only way you can 'access' these sprites is by using what OpenFL provides: your sprite objects, BitmapData, etc.
The way you probably tried to get a glow was through the GlowFilter class, which is wrongly described as 'Available on all platforms' in the docs. This class isn't implemented for the canvas element yet, but it could be.
However, WebGL exists for OpenFL; it isn't all canvas. That's why shaders are possible. There is a sort of glow filter for WebGL built into OpenFL, first shown in this commit of this pull request. So using shaders might be your best bet.
Unfortunately, shaders don't work with canvas. I feel like doing glows in canvas would be your best approach, but it's not yet implemented.
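For reference, a plain-JavaScript sketch of the same shadowBlur trick the HackyBitmap above uses; the canvas lookup and the img/x/y/blur arguments are illustrative assumptions, not OpenFL API:
// shadowBlur/shadowColor make the next draw call emit a soft glow.
var ctx = document.querySelector('canvas').getContext('2d');

function drawGlowing(img, x, y, blur) {
  ctx.save();
  ctx.shadowBlur = blur;      // glow radius in pixels
  ctx.shadowColor = 'white';  // glow colour
  ctx.drawImage(img, x, y);
  ctx.restore();              // keep other draws unaffected
}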

Uploading image texture creates moiré

We have implemented Fine Uploader and are running into an issue with some images that our clients are uploading. For large image files with a repeated canvas texture, Fine Uploader resizes the images fine, but a moiré pattern is introduced into the final image. Is there any way to help prevent this from happening?
Here is an example:
http://205.238.27.187/Hagan/site/Artwork-Detail.cfm?ArtistsID=1110&NewID=10709
This is not because of the quality setting.
Most browsers use linear interpolation rather than bicubic when resizing images, and Fine Uploader uses the default browser image-resizing algorithm.
The solution I've found is limby-resize. It uses pixel averaging, a much better resizing algorithm, but it is more CPU-intensive. There is a link to a demo in the README file (Fine Uploader uses the 'crappy' method).
In megapix-image.js, around line 168, or in the Fine Uploader source code:
Replace:
else {
  ctx.drawImage(img, 0, 0, width, height);
}
canvas.qqImageRendered && canvas.qqImageRendered();
With:
else {
  var tmpCanvas = document.createElement("canvas");
  tmpCanvas.width = iw;
  tmpCanvas.height = ih;
  var tmpCtx = tmpCanvas.getContext("2d");
  tmpCtx.drawImage(img, 0, 0);
  canvasResize(tmpCanvas, canvas, function () {
    alert("Image resized by limby-resize");
    canvas.qqImageRendered && canvas.qqImageRendered();
  });
}
And include limby-resize's canvas_resize.js before the Fine Uploader JS file in your HTML.
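If patching the source is not an option, a dependency-free sketch of the same idea is to step the image down in halves with drawImage, which approximates area averaging and tames moiré on repeated textures; stepDownResize is an illustrative helper, not part of Fine Uploader or limby-resize:
function stepDownResize(img, targetW, targetH) {
  // Start from the full-size image on a scratch canvas.
  var canvas = document.createElement('canvas');
  var w = img.width, h = img.height;
  canvas.width = w;
  canvas.height = h;
  canvas.getContext('2d').drawImage(img, 0, 0);
  // Halve repeatedly while the next halving stays above the target size;
  // each pass blends neighbouring pixels instead of skipping them.
  while (w / 2 >= targetW && h / 2 >= targetH) {
    var half = document.createElement('canvas');
    half.width = w = Math.round(w / 2);
    half.height = h = Math.round(h / 2);
    half.getContext('2d').drawImage(canvas, 0, 0, w, h);
    canvas = half;
  }
  // Final pass to the exact target dimensions.
  var out = document.createElement('canvas');
  out.width = targetW;
  out.height = targetH;
  out.getContext('2d').drawImage(canvas, 0, 0, targetW, targetH);
  return out;
}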

Rendering Mesh from TextureRegion renders whole TextureAtlas

I have a simple render method that I want to use to draw a simple mesh. The code below, which uses the TextureRegion class instead of Texture, renders a dirty, broken mesh. It seems that the whole texture atlas from which the region is extracted is rendered instead of just the region:
public void render(SpriteBatch batch, OrthographicCamera camera) {
    mesh.setVertices(vert);
    mesh.setIndices(indi);
    Gdx.gl20.glEnable(GL20.GL_BLEND);
    Gdx.graphics.getGL20().glEnable(GL20.GL_TEXTURE_2D);
    shader.begin();
    shader.setUniformMatrix("u_worldView", camera.combined);
    shader.setUniformi("u_texture", 0);
    textureRegion_laser.getTexture().bind(0);
    mesh.render(shader, GL20.GL_TRIANGLES);
    shader.end();
    Gdx.gl20.glDisable(GL20.GL_BLEND);
    Gdx.graphics.getGL20().glDisable(GL20.GL_TEXTURE_2D);
}
The code below, which uses a Texture instead of a TextureRegion, draws the mesh correctly. What's the problem?
public void render(SpriteBatch batch, OrthographicCamera camera) {
    mesh.setVertices(vert);
    mesh.setIndices(indi);
    Gdx.gl20.glEnable(GL20.GL_BLEND);
    Gdx.graphics.getGL20().glEnable(GL20.GL_TEXTURE_2D);
    shader.begin();
    shader.setUniformMatrix("u_worldView", camera.combined);
    shader.setUniformi("u_texture", 0);
    texture.bind(0);
    mesh.render(shader, GL20.GL_TRIANGLES);
    shader.end();
    Gdx.gl20.glDisable(GL20.GL_BLEND);
    Gdx.graphics.getGL20().glDisable(GL20.GL_TEXTURE_2D);
}
