How do I set quad buffering with JOGL 2.0 - Processing

I'm trying to create a 3D renderer for stereo vision with quad buffering in Processing/Java. The hardware I'm using supports this, so that's not the problem.
I had a stereo.jar library for JOGL 1.0 working with Processing 1.5, but now I have to use Processing 2.0 and JOGL 2.0, so I have to adapt the library.
Some things have changed in the source code of JOGL and Processing, and I'm having a hard time figuring out how to tell Processing I want to use quad buffering.
Here's the previous code:
public class Theatre extends PGraphicsOpenGL
{
  protected void allocate()
  {
    if (context == null)
    {
      // If OpenGL 2X or 4X smoothing is enabled, setup caps object for them
      GLCapabilities capabilities = new GLCapabilities();

      // Starting in release 0158, OpenGL smoothing is always enabled
      if (!hints[DISABLE_OPENGL_2X_SMOOTH])
      {
        capabilities.setSampleBuffers(true);
        capabilities.setNumSamples(2);
      }
      else if (hints[ENABLE_OPENGL_4X_SMOOTH])
      {
        capabilities.setSampleBuffers(true);
        capabilities.setNumSamples(4);
      }

      capabilities.setStereo(true);

      // get a rendering surface and a context for this canvas
      GLDrawableFactory factory = GLDrawableFactory.getFactory();
      drawable = factory.getGLDrawable(parent, capabilities, null);
      context = drawable.createContext(null);

      // need to get proper opengl context since will be needed below
      gl = context.getGL();

      // Flag defaults to be reset on the next trip into beginDraw().
      settingsInited = false;
    }
    else
    {
      // The following three lines are a fix for Bug #1176
      // http://dev.processing.org/bugs/show_bug.cgi?id=1176
      context.destroy();
      context = drawable.createContext(null);
      gl = context.getGL();
      reapplySettings();
    }
  }
}
This was the renderer of the old library. In order to use it, I needed to do size(100, 100, "stereo.Theatre").
Now I'm trying to do the stereo directly in my Processing sketch. Here's what I'm trying:
PGraphicsOpenGL pg = ((PGraphicsOpenGL)g);
pgl = pg.beginPGL();
gl = pgl.gl;
glu = pg.pgl.glu;
gl2 = pgl.gl.getGL2();
GLProfile profile = GLProfile.get(GLProfile.GL2);
GLCapabilities capabilities = new GLCapabilities(profile);
capabilities.setSampleBuffers(true);
capabilities.setNumSamples(4);
capabilities.setStereo(true);
GLDrawableFactory factory = GLDrawableFactory.getFactory(profile);
If I go on, I should do something like this:
drawable = factory.getGLDrawable(parent, capabilities, null);
but drawable isn't a field anymore and I can't find a way to do it.
How do I set quad buffering?
If I try this:
gl2.glDrawBuffer(GL.GL_BACK_RIGHT);
it obviously doesn't work :/
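For reference, this is roughly the per-frame pattern I expect to end up with once a quad-buffered context actually exists (just a sketch reusing the gl2 object from above; without setStereo(true) on the capabilities used to create the drawable, the GL_BACK_LEFT / GL_BACK_RIGHT buffers simply aren't available):
// Sketch only: assumes the underlying drawable was created with
// GLCapabilities.setStereo(true), otherwise the right back buffer doesn't exist.
gl2.glDrawBuffer(GL2.GL_BACK_LEFT);
gl2.glClear(GL.GL_COLOR_BUFFER_BIT | GL.GL_DEPTH_BUFFER_BIT);
// ... set the left-eye camera and draw the scene ...

gl2.glDrawBuffer(GL2.GL_BACK_RIGHT);
gl2.glClear(GL.GL_COLOR_BUFFER_BIT | GL.GL_DEPTH_BUFFER_BIT);
// ... set the right-eye camera and draw the scene ...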
Thanks.

Related

Programmatically detect rendering mode in p5.js?

For the p5.js rendering engine, if I use WEBGL vs P2D in the setup() function, how can I know later in my code which rendering mode I am in? I have written generic functions that work across 2D and 3D modes, and I want the code to execute in different ways based on the rendering mode.
There probably are more straightforward and elegant ways of doing it but, in a pinch, you can read the drawingContext of the renderer used and see whether it's an instance of WebGLRenderingContext or CanvasRenderingContext2D:
const webglSketch = p => {
  p.setup = () => {
    p.createCanvas(100, 100, p.WEBGL)
    p.background('red')
    console.log('WEBGL?', p._renderer.drawingContext instanceof WebGLRenderingContext)
    console.log('2D?', p._renderer.drawingContext instanceof CanvasRenderingContext2D)
  }
}

const twoDSketch = p => {
  p.setup = () => {
    p.createCanvas(100, 100)
    p.background('blue')
    console.log('WEBGL?', p._renderer.drawingContext instanceof WebGLRenderingContext)
    console.log('2D?', p._renderer.drawingContext instanceof CanvasRenderingContext2D)
  }
}

new p5(webglSketch)
new p5(twoDSketch)
<script src="https://cdnjs.cloudflare.com/ajax/libs/p5.js/1.0.0/p5.min.js"></script>
If you're not using the instance mode, just check the _renderer global object.
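For example, in global mode a quick check could look like this (just a sketch; _renderer is the renderer object p5 exposes as a global in global mode):
function setup() {
  createCanvas(100, 100, WEBGL)
  // _renderer is the global renderer created by p5 in global mode
  const is3D = _renderer.drawingContext instanceof WebGLRenderingContext
  console.log(is3D ? 'rendering in WEBGL' : 'rendering in 2D')
}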

How to get smooth animation in SkiaSharp

I am trying to achieve smooth animation using Xamarin SkiaSharp. The core issue is that the time between calling canvasView.InvalidateSurface(); and hitting the method OnCanvasViewPaintSurface to do the redraw can vary from 3 to 30 ms, which gives a somewhat jerky appearance when you are moving an object across the screen. I have tried to mitigate this by adding a dead loop in the draw code, which helps some but is not a great solution. I do not understand why the time varies so much, and I do not see any way around this. You cannot put a sleep in the draw code. How do games achieve smooth animation? My code follows:
async Task DoAnimationLoop()
{
    while (DoAnimation)
    {
        AccumulatedTime = StopWatch1.ElapsedMilliseconds;
        await Task.Delay(TimeSpan.FromMilliseconds(5));
        if (AccumulatedTime > 50)
        {
            StopWatch1.Restart();
            MoveItems();
            SKCanvasView canvasView = Content as SKCanvasView;
            TotalBounds = new Size(canvasView.Width, canvasView.Height);
            canvasView.InvalidateSurface();
        }
    }
}

private void OnCanvasViewPaintSurface(object sender, SKPaintSurfaceEventArgs e)
{
    AccumulatedTime = StopWatch1.ElapsedMilliseconds;
    while (AccumulatedTime < 30)
    {
        AccumulatedTime = StopWatch1.ElapsedMilliseconds;
    }
    e.Surface.Canvas.Clear();
    e.Surface.Canvas.DrawBitmap(Background, 0, 0);
    foreach (Item item in AllItems)
    {
        e.Surface.Canvas.DrawBitmap(item.CurrentBitmap,
            item.CurrentPositionX, item.CurrentPositionY);
    }
}
For future readers:
From experience, I get the smoothest animations with SkiaSharp by creating SKCanvasViews with bindable properties that can be incremented by a Xamarin.Forms Animation. The Animation handles all the timing and sleeping code based on the values you configure it with. Since you want a loop, you can set the repeat delegate to return true when calling animation.Commit( ... repeat: () => true ...).
Here is a method example that animates a progress bar "filling" to 100 percent (without looping) by incrementing the ProgressBar's PercentageFilled property. Note the timing settings you can configure: the refresh rate is 11 ms (which equates to roughly "90 fps": 1000 ms / 90 ≈ 11.1 ms), timeToAnimate is the length of time the animation should take to complete in ms, and you can choose from several easing functions.
private void AnimateProgressBar()
{
    double startPercentage = 0; // start at 0 percent
    // Fill to 100 percent. The Animation evenly increments between 0 and 1; the
    // ProgressBar's OnPaintSurface method knows how to draw from that decimal,
    // i.e. if PercentageFilled is .5 it draws the bar to 50 percent of its possible max length.
    double endPercentage = 1;
    uint timeToAnimate = 1000;

    Xamarin.Forms.Animation animation = new Animation(
        v => _ProgressBar.PercentageFilled = (float)v,
        startPercentage, endPercentage, easing: Easing.CubicOut);

    animation.Commit(_ProgressBar, "FillPercentage",
        length: timeToAnimate, finished: (l, c) => animation = null, rate: 11);
}
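As mentioned above, if you do want the animation to loop, the same Commit call can take a repeat delegate; a minimal variation of the call above (same animation object, just a sketch) would be:
// Looping variant (sketch): the repeat delegate makes the animation restart
// each time it completes, instead of being torn down in the finished callback.
animation.Commit(_ProgressBar, "FillPercentage",
    rate: 11,                // ~90 fps
    length: timeToAnimate,
    repeat: () => true);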
When the PercentageFilled property changes, it triggers a redraw by calling InvalidateSurface() inside the OnPropertyChanged method. To do this, override OnPropertyChanged like so in your SKCanvasView-derived class:
class ProgressBar : SKCanvasView
{
    //...
    public BindableProperty PercentageFilledProperty =
        BindableProperty.Create(nameof(PercentageFilled), typeof(float), typeof(ProgressBar), 0f);

    public float PercentageFilled
    {
        get { return (float)GetValue(PercentageFilledProperty); }
        set { SetValue(PercentageFilledProperty, value); }
    }

    protected override void OnPropertyChanged(string propertyName = null)
    {
        base.OnPropertyChanged(propertyName);
        InvalidateSurface();
    }

    protected override void OnPaintSurface(SKPaintSurfaceEventArgs args)
    {
        // Draws the progress bar based on the PercentageFilled property
    }
    //....
}
In the question's code it appears that a lot of items are being moved in sequence (MoveItems();), and redrawing the items one by one as they move might be the cause of the jitter. MoveItems is also the place where you could use Forms.Animate: create a Forms.Animation with the movement logic as its callback (a rough sketch follows below). Look at the Microsoft documentation for "Custom Animations in Xamarin.Forms" for more info on how to animate with callbacks.
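Something along these lines (only a sketch: it assumes this lives in your page, that canvasView is reachable there as a field, and that each Item exposes hypothetical StartX/StartY/TargetX/TargetY values to interpolate between):
// Sketch: let a Xamarin.Forms Animation drive the movement instead of the manual loop.
// 'progress' runs from 0 to 1 on every pass; each item interpolates toward its target.
void AnimateItems()
{
    var animation = new Animation(progress =>
    {
        foreach (Item item in AllItems)
        {
            item.CurrentPositionX = item.StartX + (float)progress * (item.TargetX - item.StartX);
            item.CurrentPositionY = item.StartY + (float)progress * (item.TargetY - item.StartY);
        }
        canvasView.InvalidateSurface();   // one redraw per animation tick
    });

    // rate: 11 ms is roughly 90 fps; repeat keeps it running like the original while-loop did
    animation.Commit(this, "MoveItems", rate: 11, length: 1000, repeat: () => true);
}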
Also check out "The basics to create custom Xamarin.Forms controls using SkiaSharp" by Konrad Müller on Medium, which contains this paragraph worth considering if you are making a game:
The basic SkiaSharp.Views.Forms provides two views you can use as a base for your controls: SKCanvasView and SKGLView. The CanvasView uses the CPU-accelerated backend while the GLView uses OpenGL and therefore the GPU. You might intuitively think that using the GPU for graphics operations is always the better choice but in fact, OpenGL has a high overhead when creating the GL context. The CanvasView simply allocates the memory it needs and, as long as enough CPU power is available, it can render without problems. In theory, the CanvasView should be better suited for less demanding renderings and the GLView better for complex renderings, but the power of modern smartphones makes these differences mostly unnoticeable. I would recommend simply sticking with the CanvasView and switching to the GLView if the rendering gets too complex and you notice performance problems.

Three.js: MeshStandardMaterial reflection issues

I've stumbled upon the problem that some browsers and devices render the MeshStandardMaterial reflection poorly.
Consider the two example comparisons (screenshots not reproduced here): both are running simultaneously on the same computer, with the same graphics card and identical attributes, but in different browsers. As you can see, the reflections on the right are almost unidentifiable.
Additionally, I'm getting some triangulation issues at sharp angles that make it seem as if the reflection is being calculated in the vertex shader.
I understand that different browsers have different WebGL capabilities, as the results on http://webglreport.com/ illustrate.
Does anybody know what WebGL extension or feature the IE/Edge browsers are missing that I can look for? I want to put a sniffer that uses a different material if it doesn't meet the necessary requirements. Or if anybody has a full solution, that would be even better. I've already tried playing with the EnvMap's minFilter attribute, but the reflections are still calculated differently.
I don't know which extensions are needed, but you can easily test. Before you initialize THREE.js, put in some code like this:
const extensionsToDisable = [
  "OES_texture_float",
  "OES_texture_float_linear",
];

WebGLRenderingContext.prototype.getExtension = function(oldFn) {
  return function(extensionName) {
    if (extensionsToDisable.indexOf(extensionName) >= 0) {
      return null;
    }
    return oldFn.call(this, extensionName);
  };
}(WebGLRenderingContext.prototype.getExtension);

WebGLRenderingContext.prototype.getSupportedExtensions = function(oldFn) {
  return function() {
    const extensions = oldFn.call(this);
    return extensions.filter(e => extensionsToDisable.indexOf(e) < 0);
  };
}(WebGLRenderingContext.prototype.getSupportedExtensions);
Then just selectively disable extensions until Firefox/Chrome look the same as IE/Edge.
The first thing I'd test is disabling every extension that's in Chrome/Firefox that's not in IE/Edge just to verify that turning them all off reproduces the IE/Edge behavior.
If it does reproduce the issue then I'd do a binary search (turn on half the disabled extensions), and repeat until I found the required ones.
const extensionsToDisable = [
  "EXT_blend_minmax",
  "EXT_disjoint_timer_query",
  "EXT_shader_texture_lod",
  "EXT_sRGB",
  "OES_vertex_array_object",
  "WEBGL_compressed_texture_s3tc_srgb",
  "WEBGL_debug_shaders",
  "WEBKIT_WEBGL_depth_texture",
  "WEBGL_draw_buffers",
  "WEBGL_lose_context",
  "WEBKIT_WEBGL_lose_context",
];

WebGLRenderingContext.prototype.getExtension = function(oldFn) {
  return function(extensionName) {
    if (extensionsToDisable.indexOf(extensionName) >= 0) {
      return null;
    }
    return oldFn.call(this, extensionName);
  };
}(WebGLRenderingContext.prototype.getExtension);

WebGLRenderingContext.prototype.getSupportedExtensions = function(oldFn) {
  return function() {
    const extensions = oldFn.call(this);
    return extensions.filter(e => extensionsToDisable.indexOf(e) < 0);
  };
}(WebGLRenderingContext.prototype.getSupportedExtensions);

const gl = document.createElement("canvas").getContext("webgl");
console.log(gl.getSupportedExtensions().join('\n'));
console.log("WEBGL_draw_buffers:", gl.getExtension("WEBGL_draw_buffers"));

Custom FireMonkey component does not get drawn

Working in C++Builder 10.2 Tokyo, I am trying to add a custom component to a FireMonkey TForm programmatically at runtime.
The custom component is not installed as a package and registered in the IDE (as that ended up complicating the project too much), rather it is simply a subclass of TPanel.
However, the component and its children do not get drawn when I run the application. I have tested this on Windows and Android, and tried multiple modifications, like setting the Width and Height explicitly.
How can I fix this?
Below is the relevant bit of my code:
__fastcall TForm1::TForm1(TComponent* Owner)
    : TForm3D(Owner)
{
    mkView = new MKView(this);
    mkView->Align = TAlignLayout::Client;
    mkView->Enabled = true;
    mkView->Visible = true;
    mkView->Parent = this;
}

__fastcall MKView::MKView(TComponent *Owner)
    : TPanel(Owner)
{
    this->OnMouseDown = MKView_OnMouseDown;
    TLabel1 = new TLabel(this);
    TLabel1->Text = "Here I am!";
    TLabel1->Enabled = true;
    TLabel1->Visible = true;
    TLabel1->Parent = this;
    TLabel1->OnMouseDown = MKView_OnMouseDown;
}
It looks like TForm3D doesn't work well with standard FireMonkey components, since it is designed for rendering FireMonkey 3D components and uses OnRender() instead of OnPaint(). I was using TForm3D for its OpenGL context, but after switching to a standard TForm the components are now being drawn.
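For completeness, the change amounts to swapping the form's base class, roughly like this (only a sketch, assuming the usual C++Builder-generated form declaration):
// Form1.h (sketch): derive the form from TForm instead of TForm3D
class TForm1 : public TForm
{
__published:    // IDE-managed components
private:
    MKView *mkView;
public:
    __fastcall TForm1(TComponent* Owner);
};

// Form1.cpp
__fastcall TForm1::TForm1(TComponent* Owner)
    : TForm(Owner)          // was: TForm3D(Owner)
{
    mkView = new MKView(this);
    mkView->Align = TAlignLayout::Client;
    mkView->Parent = this;
}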

Load texture resized in XNA

I'm developing for Windows Phone XNA and would like to load textures with a smaller size to decrease memory impact where the full image isn't required.
My current solution is to use a rendertarget to draw and return that rendertarget as a smaller texture to use:
public static Texture2D LoadResized(string texturePath, float scale)
{
    Texture2D texLoaded = Content.Load<Texture2D>(texturePath);
    Vector2 resizedSize = new Vector2(texLoaded.Width * scale, texLoaded.Height * scale);
    Texture2D resized = ResizeTexture(texLoaded, resizedSize);
    //texLoaded.Dispose();
    return resized;
}

public static Texture2D ResizeTexture(Texture2D toResize, Vector2 targetSize)
{
    RenderTarget2D renderTarget = new RenderTarget2D(
        GraphicsDevice, (int)targetSize.X, (int)targetSize.Y);
    Rectangle destinationRectangle = new Rectangle(
        0, 0, (int)targetSize.X, (int)targetSize.Y);

    GraphicsDevice.SetRenderTarget(renderTarget);
    GraphicsDevice.Clear(Color.Transparent);

    SpriteBatch.Begin();
    SpriteBatch.Draw(toResize, destinationRectangle, Color.White);
    SpriteBatch.End();

    GraphicsDevice.SetRenderTarget(null);

    return renderTarget;
}
This works in that the texture gets resized, but judging from memory usage it looks like the texture texLoaded never gets freed. When the Dispose call is uncommented, SpriteBatch.End() throws a disposed exception.
Any other way to load the texture resized for less memory usage?
Your code is almost ok. There's a minor bug in it.
You'll probably notice that it only throws that exception the second time you call LoadResized for any given texture. This is because ContentManager keeps an internal cache of the content it loads - it "owns" everything that it loads. That way, if you load something twice, it just gives you back the cached object. By calling Dispose you are disposing the object in its cache!
The solution, then, is not to use ContentManager to load your content - at least not the default implementation. You can derive your own class from ContentManager that does not cache items, like so (code is based on this blog post):
class FreshLoadContentManager : ContentManager
{
    public FreshLoadContentManager(IServiceProvider s) : base(s) { }

    public override T Load<T>(string assetName)
    {
        return ReadAsset<T>(assetName, (d) => { });
    }
}
Pass in Game.Services to create one. Don't forget to set the RootDirectory property.
Then use this derived content manager to load your content. You can now safely (and should!) Dispose of all the content you load through it yourself.
You may also wish to attach an event handler to the RenderTarget2D.ContentLost event, so that, in the event the graphics device is "lost", the resized texture gets recreated.
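Usage would then look roughly like this (a sketch; freshContent is just an example field name, and the code reuses the texturePath/resizedSize variables from the question):
// Sketch: one non-caching manager for throwaway loads; dispose what you load yourself.
FreshLoadContentManager freshContent = new FreshLoadContentManager(Game.Services)
{
    RootDirectory = "Content"
};

Texture2D texLoaded = freshContent.Load<Texture2D>(texturePath);
Texture2D resized = ResizeTexture(texLoaded, resizedSize);
texLoaded.Dispose();   // safe now - this manager kept no reference to it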
