Custom antialiasing settings in three.js

I am trying to find a way to specify some antialiasing settings in three.js/WebGL to try and improve the results.
The thing is that with the exact same code, if I load a model on a Retina display, the antialiasing works quite well (even if I move it to my non-retina external monitor afterwards), but it's all pixelated if I load it first on a non-retina screen.
Here is a screenshot (both on Chrome, both displayed on a retina display). Left was loaded on a non-retina, right on a retina: https://i.imgur.com/krNavZU.png
What I get from this is that three.js somehow uses the pixel density when initializing the antialiasing. Is there any way to tweak this so that I can force it to something better?
Thanks a lot in advance for your help :)
Side note: for the record, the antialiasing also seems to work much better on Firefox. Does anyone know why?

Just in case someone is looking to do the same kind of tweaking I was trying, I'll answer my own question.
Based on WaclawJasper's comment, I found some documentation in Three.js related to my issue. From http://threejs.org/docs/#Reference/Renderers/WebGLRenderer:
.setPixelRatio ( value )
Sets device pixel ratio. This is usually used for HiDPI device to prevent blurring output canvas.
I was using renderer.setPixelRatio( window.devicePixelRatio ); in my initialization, which is why the rendering depended on where the page was first loaded.
As for the antialiasing, I now artificially raise the pixel density with renderer.setPixelRatio(2); on non-retina screens. This results in much more effective antialiasing, at the cost of extra computation.
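For reference, a minimal initialization sketch of this workaround (the floor of 2 is just one reasonable choice, not the only option):

const renderer = new THREE.WebGLRenderer({ antialias: true });
// Never go below a 2x backing-store resolution, even on non-retina screens;
// Math.max keeps the native ratio on HiDPI displays.
renderer.setPixelRatio(Math.max(window.devicePixelRatio || 1, 2));
renderer.setSize(window.innerWidth, window.innerHeight);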

Related

Xamarin: improve pinch scaling

I have a problem. I followed this tutorial about touch manipulations in SkiaSharp: https://learn.microsoft.com/en-us/xamarin/xamarin-forms/user-interface/graphics/skiasharp/transforms/touch and it works great. But scaling, for example, only works if both my fingers hit the bitmap; if the bitmap is very small, I can't scale it at all. How can I anchor the bitmap with my primary finger and use my secondary finger anywhere on the canvas to scale it?
My full code is on that page!
Any ideas?
I looked at image apps like Google Photos and others. In my opinion, there are two possible improvements to your image component:
Handle touch on the background view instead of the image view.
Restrict scaling below a minimum scale value. Very low scale values have no use case, since the end user cannot see the image anyway.
I'm not sure of your use case; whether to implement the second improvement is entirely up to you.
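The SkiaSharp handlers themselves are C#, but the clamping logic for the second point is the same in any language. A minimal JavaScript sketch of the idea, with hypothetical names (totalScale, pinchDelta, MIN_SCALE):

// totalScale: accumulated scale; pinchDelta: scale factor from the current gesture.
const MIN_SCALE = 0.25; // below this the image is too small to see or touch
const MAX_SCALE = 10;

function applyPinch(totalScale, pinchDelta) {
  // Clamp so the user can never shrink the bitmap into an untouchable dot.
  return Math.min(MAX_SCALE, Math.max(MIN_SCALE, totalScale * pinchDelta));
}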

Low-resolution background appears blurry

I am trying to use a premade stage from an old game as my background. Its resolution is 4864 x 184. I've tried every setting in the graphics options and any other related settings. I think it's related to the texture page size, but I can't find any information on how to change that.
Thanks for any replies.
Raising the texture page size is not the best solution in this case, though. I would recommend splitting the background into smaller pieces (two of 2048 x 184 and one of 768 x 184), because otherwise it may not work on some old hardware.

IKImageBrowserView on retina screen

Has anyone successfully used an IKImageBrowserView with a Retina Mac? What I see is that the image size is wildly misinterpreted. Previously I was using CGImage images, which don't have a logical size, so it made sense that the browser couldn't draw them at the right size. However, I've switched to NSImage, created using -initWithCGImage:size:, and that still doesn't work right.
My images are 244x184 pixels and should be drawn at a logical size of 122x92. When passing 122x92 as the size, they are drawn way too large, at about 180 pixels wide. If I pass exactly half this, 61x46, the size is correct, but the image looks downscaled and not sharp. If I pass 122x92 and run with NSHighResolutionCapable set to NO in Info.plist, everything works well.
My conclusion is that IKImageBrowserView is not Retina compatible even with the 10.10 SDK on a Retina MacBook Pro running OS X 10.11. Or am I missing something? Any pointers would be appreciated!
I discovered that I wasn't really thinking about it the right way. The browser is supposed to always scale its images, so that's why the Retina-sized images ended up larger. I just subclassed the browser so I could use a custom cell and customize the image frame per cell. There are, however, some subtle bugs in the browser that cause it to scale the images a little bit in Retina mode, but I was able to work around that by creating a custom foreground layer for each cell that contains the image without scaling. Problem solved; hopefully this will help someone else in the future.

WebGL vs CSS3D for large scatter plot of images

I am building a web application which will display a large number of image thumbnails as a 3D cloud and provide the ability to click on individual images to launch a larger view. I have successfully done this with CSS3D using three.js, creating a THREE.CSS3DObject for each thumbnail and then appending the thumbnail as an svg:image.
It works great for up to ~1200 thumbnails, and then performance starts to drop off (very low FPS and long load times). By the time you hit 2500 thumbnails it is unusable. Ideally I want to work with over 10k thumbnails.
From what I can tell, I should be able to achieve the same result by creating each thumbnail as a WebGL mesh with a texture. I am a beginner with three.js though, so before I put in the effort I was hoping for guidance on whether I can expect performance to be better, or whether I am just asking too much of 3D in the browser.
As far as rendering goes, CSS3D should be relatively okay for rendering quite a big number of "sprites", but 10k would probably be too much.
WebGL would probably be the better option. You could also look at further optimizations, such as storing the thumbnails in an atlas texture.
But rendering is just one part. Event handling can be a serious bottleneck if not handled carefully.
I don't know how you're handling the mouse click event and the transition to the full-size image, but attaching an event listener to each of 2.5k+ objects probably isn't a good choice anyway. With pure WebGL you could do the picking in image space: encode each tile with a different id/color, render that to an offscreen buffer, and use the color under the cursor to determine what was clicked. I imagine a WebGL/CSS3D combo could use this approach as well.
To answer the question: WebGL should handle 10k fine. You may need to think about some performance optimizations if your rectangles are big and take up a significant amount of the screen, but there are ways around that if the problem appears.
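To make the id/color picking idea concrete, here is a rough three.js sketch (all names are illustrative, and it assumes one mesh per tile; the same idea works with instancing):

// Build a parallel "picking" scene where each tile's color encodes its index.
const pickingScene = new THREE.Scene();
const pickingTarget = new THREE.WebGLRenderTarget(1, 1);
const pixel = new Uint8Array(4);

tiles.forEach((tile, i) => {
  // i + 1 so that 0 (black) can mean "background".
  const mat = new THREE.MeshBasicMaterial({ color: new THREE.Color(i + 1) });
  const twin = new THREE.Mesh(tile.geometry, mat);
  twin.position.copy(tile.position);
  pickingScene.add(twin);
});

function pick(x, y, camera, renderer) { // x, y in device pixels
  // Render just the 1x1 region under the cursor into the picking target.
  camera.setViewOffset(renderer.domElement.width, renderer.domElement.height, x, y, 1, 1);
  renderer.setRenderTarget(pickingTarget);
  renderer.render(pickingScene, camera);
  renderer.setRenderTarget(null);
  camera.clearViewOffset();
  renderer.readRenderTargetPixels(pickingTarget, 0, 0, 1, 1, pixel);
  const id = (pixel[0] << 16) | (pixel[1] << 8) | pixel[2];
  return id - 1; // -1 means the background was clicked
}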

Photoshop blending mode to OpenGL ES without shaders

I need to imitate Photoshop blending modes ("multiply", "screen" etc.) in my OpenGL ES 1.1 code (without shaders).
There are some docs on how to do this with shaders:
http://www.nathanm.com/photoshop-blending-math/ (archive)
http://mouaif.wordpress.com/2009/01/05/photoshop-math-with-glsl-shaders/
I need at least working Screen mode.
Are there any implementations on fixed pipeline I may look at?
Most Photoshop blend modes are based on the Porter-Duff blend modes.
These require that all your images (textures, renderbuffers) are in premultiplied color space. That is usually done by multiplying all pixel values by the alpha value before storing them in a texture; e.g. a fully transparent pixel will look black in premultiplied color space. If you're unfamiliar with this color space, spend an hour or two reading about it on the web. It's a neat concept and required for Photoshop-like composition.
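The conversion itself is just a per-pixel multiply. A minimal JavaScript sketch over RGBA byte data, purely to illustrate the math (in an OpenGL ES 1.1 app you would do the same in C when loading the texture):

// data: a Uint8ClampedArray of RGBA bytes, e.g. from canvas ImageData.
function premultiply(data) {
  for (let i = 0; i < data.length; i += 4) {
    const a = data[i + 3] / 255;
    data[i]     = Math.round(data[i]     * a); // R * A
    data[i + 1] = Math.round(data[i + 1] * a); // G * A
    data[i + 2] = Math.round(data[i + 2] * a); // B * A
  }
  return data;
}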
Anyway - once you have your images in that format you can enable SCREEN using:
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_COLOR)
The full MULTIPLY mode is not possible with the OpenGL ES fixed-function pipeline. If you only work with fully opaque pixels, you can fake it using:
glBlendFunc(GL_ZERO, GL_SRC_COLOR)
The results for transparent pixels, either in your texture or in your framebuffer, will be wrong though.
You should try this:
glBlendFunc(GL_DST_COLOR, GL_ONE_MINUS_SRC_ALPHA)
This looks like multiply to me on the iPhone / OpenGL ES.
Your best place to start is to pick up a copy of the Red Book and read through the chapters on materials and blending modes. It has a very comprehensive and clear explanation of how the 'classic' OpenGL blending functions work.
I have found that using this:
glDepthFunc(GL_LEQUAL);
was all I needed to get a screen effect; at least it worked well on my project.
I am not sure why this works, but if someone knows, please share.
