Antialiased lines using JOGL on Windows 7 look horrible

My Java application uses JOGL to create surfaces with an overlaid grid. The lines of the grid are anti-aliased. However, on Windows the anti-aliasing is complete garbage, unless I add a small alpha value to the surface. Here are screenshots of the same scene on all three platforms:
(Screenshots: Mac OS X; Ubuntu 11.10; Windows 7; Windows 7 with alpha=0.01)
The basic procedure to create this is as follows:
gl.glPushAttrib(GL.GL_ALL_ATTRIB_BITS);
gl.glEnable(GL.GL_POLYGON_OFFSET_FILL);
gl.glPolygonOffset(1.0f, 1.0f);
drawSurface(gl);
gl.glDisable(GL.GL_POLYGON_OFFSET_FILL);
gl.glPopAttrib();

gl.glDisable(GL.GL_LIGHTING);
gl.glDepthFunc(GL.GL_LEQUAL);
float[] c = {0, 0, 0, 0.5f};
// The alpha here gets overridden if one is specified for the surface,
// which is why the Windows-with-alpha shot produces a darker grid.
gl.glColor4fv(c, 0);
drawGrid(gl);
gl.glDepthFunc(GL.GL_LESS);
gl.glEnable(GL.GL_LIGHTING);
The drawSurface() method also does the following before creating the polygons for the surface (using GL_TRIANGLE_STRIP primitives):
gl.glPushAttrib(GL.GL_LIGHTING_BIT);
gl.glColorMaterial(GL.GL_FRONT_AND_BACK, GL.GL_AMBIENT_AND_DIFFUSE);
gl.glEnable(GL.GL_COLOR_MATERIAL);
The drawGrid() method sets up anti-aliasing like so:
gl.glEnable(GL.GL_BLEND);
gl.glBlendFunc(GL.GL_SRC_ALPHA, GL.GL_ONE_MINUS_SRC_ALPHA);
gl.glEnable(GL.GL_LINE_SMOOTH);
gl.glHint(GL.GL_LINE_SMOOTH_HINT, GL.GL_NICEST);
The grid itself is drawn with GL_LINE_STRIP primitives.
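For concreteness, drawGrid() presumably looks something like the following minimal sketch (rows, cols, dx, dy, and heightAt() are hypothetical placeholders, not the original code):

private void drawGrid(GL gl) {
    gl.glEnable(GL.GL_BLEND);
    gl.glBlendFunc(GL.GL_SRC_ALPHA, GL.GL_ONE_MINUS_SRC_ALPHA);
    gl.glEnable(GL.GL_LINE_SMOOTH);
    gl.glHint(GL.GL_LINE_SMOOTH_HINT, GL.GL_NICEST);
    // One line strip per grid row; heightAt() stands in for a lookup
    // into the same data drawSurface() uses.
    for (int i = 0; i <= rows; i++) {
        gl.glBegin(GL.GL_LINE_STRIP);
        for (int j = 0; j <= cols; j++) {
            gl.glVertex3f(j * dx, i * dy, heightAt(i, j));
        }
        gl.glEnd();
    }
    // ...and a second loop over the columns for the perpendicular lines.
}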
I've read up on the OpenGL documentation regarding line anti-aliasing and also tried out the polygon offset example here.
As for hardware, I have a dual-boot IBM ThinkPad (64-bit quad core) with an integrated nVidia 3100M card, and an Intel Core 2 Duo iMac with an ATI Radeon. Since Ubuntu and Windows run on the same hardware, I know it can't be a hardware issue.
I'm looking for suggestions for improving the anti-aliasing of the grid lines on Windows.

It turns out I had not actually tried (I thought I had, but I had not rebuilt and tested it correctly):
gl.glDepthMask( false );
Adding that in correctly greatly reduced the fragmentation of the anti-aliased grid lines.
That said, I'm still not 100% happy with the look of the lines, but for now this question is resolved. Here is a screenshot of the 'improved' grid lines:
(Screenshot: the improved grid)
While this is a great improvement, it's still not as good as Mac OS X or even Ubuntu.
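With the fix in place, the grid pass reads something like this (a sketch assembled from the snippets above; only the glDepthMask() calls are new):

gl.glDisable(GL.GL_LIGHTING);
gl.glDepthFunc(GL.GL_LEQUAL);
gl.glDepthMask(false); // test against the depth buffer, but don't write to it
float[] c = {0, 0, 0, 0.5f};
gl.glColor4fv(c, 0);
drawGrid(gl);
gl.glDepthMask(true);  // re-enable depth writes for subsequent passes
gl.glDepthFunc(GL.GL_LESS);
gl.glEnable(GL.GL_LIGHTING);

The reason this helps: with depth writes enabled, the semi-transparent edge fragments of one smoothed line segment write depth and mask the overlapping fragments of the next segment, which is what produced the fragmented look.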

Related

Why does a three.js point cloud render differently between Mac and Windows (Intel HD Graphics 4000)?

I'm developing a huge point cloud viewer (over 10 million points) using Three.js.
But I got a strange result: the rendering is not the same between Mac and Windows.
(Figures: the point cloud on Mac, and the same scene on Windows 7)
Both use Intel HD Graphics 4000.
What is happening in the Chrome browser?
Additional information: the same situation occurs on an iPhone SE, iPhone X, iPad 4, MacBook, MacBook Air, and MacBook Pro. Those machines all display a very sparse point cloud (commonly Intel HD Graphics-series GPUs).
Only an iMac (2017) displays the huge point cloud successfully. It uses a Radeon Pro 555, not an Intel GPU.
I looked for any info or error messages, but there are no errors in "chrome_debug.log".
=== P.S. === Below is my code:
if (data.point.position.length > 0) {
    var geometry = new THREE.BufferGeometry();
    geometry.addAttribute('position', new THREE.BufferAttribute(data.point.position, 3));
    geometry.addAttribute('color', new THREE.BufferAttribute(data.point.color, 3));
    var material = new THREE.PointsMaterial({
        vertexColors: THREE.VertexColors,
        size: 0.8,
        sizeAttenuation: true
    });
    // (Presumably followed by something like: scene.add(new THREE.Points(geometry, material));)
}
=== P.P.S. ===
For everyone: some trial and error, before I could find a resolution.
When pointMaterial.sizeAttenuation = false, the far perspective view on Mac looks like the one on Windows; however, the near perspective view becomes a sparse point cloud.
But if I create the near perspective view with pointMaterial.sizeAttenuation = true, I get a better result than before.
Thanks a lot for your suggestions.
On both the working and the failing configuration, visit http://webglreport.com/?v=1 and check the value of the OES_element_index_uint extension. My assumption is that this extension is not supported on the failing machine/OS combination (driver support missing on macOS).
This extension is required by three.js to render more than 64k vertices from a single BufferGeometry. On machines that don't support this extension, you'll need to split up your geometry.
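You can also test for the extension at runtime instead of visiting webglreport (a sketch; `renderer` is assumed to be your THREE.WebGLRenderer):

var gl = renderer.getContext();
if (!gl.getExtension('OES_element_index_uint')) {
    // Fall back to splitting the data into BufferGeometry chunks of <= 65535 vertices.
    console.warn('OES_element_index_uint not supported; splitting geometry');
}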
Your problem is pixel density on Mac devices. Mac devices often have a pixel ratio of 2 or higher on Retina displays. When you declare pointsMaterial.size, you're telling it how big each point is in pixels, so make sure you're taking window.devicePixelRatio into account when assigning their pixel size. Something like
pointsMaterial.size = mySize * window.devicePixelRatio;
should do the trick.
For Chrome users, disabling "Use hardware acceleration when available" could solve the sparse point cloud problem, but the fps may be very low.
I didn't dig into the underlying reason; if someone finds it, please let me know.

Using Core Text on a MacBook Pro / HD screen: different text sizes

I've recently shifted to a MacBook Pro 15" and Core Text is behaving strangely.
I have an app that uses Core Text to draw an NSAttributedString with CTFrameDraw. It works as expected on the external 1080p monitor, but if I drag the window across to the MacBook Pro screen, the font is displayed very small, as if it changed from a 10-point font to 5-point when painted. Likewise, if I repaint the text on the MacBook Pro, it is still small.
I'd guess it's because the MacBook Pro has the high-resolution screen and the font is being rendered at the native pixel resolution. Could anyone point me to docs on how to handle this? I had a Google around and came up empty.
Swift 3, Xcode 8.2.1 on OS X 10.12.2
Thanks in advance.
It was me. I had this bit of code:
var c = context.ctm
c = c.inverted()
context.concatenate(c)
to set the transformation matrix back to the identity. That screws things up on double-pixel (Retina) displays.
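If you need a temporary transform, a safer pattern is to bracket the drawing with save/restore, so the display's backing-scale factor already present in the CTM survives (a sketch; `context` is the CGContext from the snippet above):

context.saveGState()
// ... apply a temporary transform and draw ...
context.restoreGState()
// The Retina 2x scale that was in the CTM is still intact here.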

Three.js MeshLambertMaterial example not visible / does not work on one computer

I have two computers:
1. Windows XP 32-bit + latest Chrome
2. Windows 7 Home Premium 64-bit + latest Chrome
Why is the MeshLambertMaterial example not showing on PC 2?
I have discovered that there is some problem related to the lights or the emissive color (initially black, and black on a black screen means nothing is visible). I can see the 3D object if I choose another color, but the result is poor because the light is not taken into account. The behaviour is like MeshBasicMaterial.
The Phong material, depth and the others work as expected.
I promise I'm using the web example http://threejs.org/docs/#Reference/Materials/MeshLambertMaterial
Same problem with the latest Firefox.
Any idea what is happening? Is it related to my graphics card? Windows 7?
Other materials (Phong) are displayed OK.
Is there any tool to check what is happening?
UPDATED
The problem could be related to the three.js release.
This example uses three.js r60:
http://www.lostmarble.com/misc/experiments/learning-threejs-master/chapter-04/06-mesh-lambert-material.html
This example works fine on my 'problematic' second computer.
However, if I change the src to three.js r71, the box is black.
The example uses an ambient white color, but this parameter does not exist in r71.
Any idea, WestLangley? (I know this is strange, but it is a real problem.)
I can see that the second example is opened from a file. If it is opened from the link you posted, does the same problem take place?
Also, the background of the page is black, so the setClearColor attribute of the renderer might be set wrong. The black box could be related to the light. Can you post some code?
I would also advise you to always use the latest revision of three.js if possible.
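For reference, this is roughly what a lit MeshLambertMaterial setup looks like in an r71-era build; without any light the material renders black, since the old material-level 'ambient' parameter was folded into the lights (an existing scene and camera are assumed):

var material = new THREE.MeshLambertMaterial({ color: 0xffffff });
var mesh = new THREE.Mesh(new THREE.BoxGeometry(1, 1, 1), material);
scene.add(mesh);
scene.add(new THREE.AmbientLight(0x404040));        // replaces the old 'ambient' parameter
var light = new THREE.DirectionalLight(0xffffff, 1); // without a light, Lambert shows black
light.position.set(1, 1, 1);
scene.add(light);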

Matlab GUI Compatibility Between Mac and Windows (Display)

For some time now, I've been working on a series of GUIs. I use a Mac running OS X to write all of my code, and the problem I've encountered is that there are deviations in appearance when the GUIs are used on Windows, some of which are minor and some of which are very significant.
1) The text in the Windows version is substantially larger overall. This results in some of my button titles simply running off the button, or panel titles moving beyond the panel.
2) Axes appear to have different dimensions between Mac and Windows, i.e. an axis that appears square on my Mac will appear elongated or rectangular on Windows, and vice versa.
3) Graphical displays are different. This is the real problem. Some of my GUIs use axes to display text and model chemical reaction animations. On the Mac they look perfectly fine, but on the Windows system the sizing is completely off.
I've set all "Units" to "characters" as suggested by the MathWorks help page, and I do not specify any fonts, to allow each system to use its default. I have, however, specified font sizes, but apparently 12-point font on Windows appears very different from 12-point font on Mac.
Are there any ways around these problems? I thought setting a specified font size and allowing for use of default fonts would fix this, but it hasn't, and I'm a little dry for ideas at this point.
Try working in 'pixels' or absolute size units instead of 'characters', and apply a scaling factor to your font sizes.
Setting 'Units' to 'characters' is probably the wrong way to go for portability, and could be the main cause of your display sizing issues. Which specific Matlab page recommended that you do so? Was it talking about cross-platform portability? The characters unit is very convenient to work with, but it is tied to the font metrics for the default system font. (See the doco for Units property at http://www.mathworks.com/help/matlab/ref/axes_props.html). That's going to differ between different operating systems. Working with 'pixels' or inches/centimeters/points, which are absolute, will probably give you more uniform results across operating systems.
And you're not wrong: OS X tends to display fonts of a given size on screen smaller than Windows does. (Generally; YMMV depending on your display DPI and system settings and other things.) For example, I run my terminals and text editors at 10 or 12 points in Windows, but 14 point or larger on Mac. So apply a scaling factor to the font sizes you set in your GUI. Figure out what looks good on Mac, and then scale it in your code to something like windows_font_size = floor(mac_font_size * 0.8) and see how it goes.
If you want to be more precise in scaling, you could grab the ScreenPixelsPerInch and ScreenSize root properties with get(0,...). You may also be able to call down in to Java code to get precise font metrics info to help with font scaling choices.
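If you go that route, the scaling factor can be derived from the reported DPI rather than hard-coded (a sketch; `handles.myLabel` is a hypothetical control):

dpi = get(0, 'ScreenPixelsPerInch');   % 72 on OS X, typically 96 on Windows
scale = 72 / dpi;                      % 1.0 on Mac, 0.75 on Windows
set(handles.myLabel, 'FontSize', 12 * scale);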
Either way, you're going to have to test your code on both systems instead of just expecting it to work portably. If you don't have ready access to a Windows development system, consider setting up a Windows VM on your Mac. With file sharing between the two sides, you'll be able to try your code out on both platforms right as you work with it.
I encountered this problem as well.
Calling this function within the FUNCTIONNAME_OpeningFcn might alleviate your issues:
function decreaseFontSizesIfReq(handles)
% Make all fonts smaller on a non-Mac-OSX computer
persistent fontSizeDecreased % initializes to [] and stays set across calls
if ~ismac()
    % No Mac OS X detected; decrease font sizes (once per session)
    if isempty(fontSizeDecreased)
        for afield = fieldnames(handles)'
            afield = afield{1}; %#ok<FXSET>
            try %#ok<TRYNC>
                set(handles.(afield),'FontSize',get(handles.(afield),'FontSize')*0.75); % decrease font size
            end
        end
        fontSizeDecreased = 1; % do not perform this step again
    end
end

How can I capture the screen with Haskell on Mac OS X?

How can I capture the screen with Haskell on Mac OS X?
I've read "Screen capture in Haskell?", but I'm working on a Mac Mini, so the Windows solution is not applicable, and the GTK solution does not work because on Macs it only captures a black screen.
How can I capture the screen with … and OpenGL?
Only with some luck. OpenGL is primarily a drawing API, and the contents of the main framebuffer are undefined unless they have been drawn by OpenGL functions themselves. That OpenGL could be abused for screenshots at all was due to the way graphics systems used to manage their on-screen windows' framebuffers: after a window without a predefined background color/brush was created, its initial framebuffer content was simply whatever was on the screen right before the window's creation. If an OpenGL context was created on top of this, the framebuffer could be read out using glReadPixels, thereby creating a screenshot.
Today window compositing has become the norm, which makes abusing OpenGL for taking screenshots almost impossible. With compositing, each window has its own off-screen framebuffer, and the screen's contents are composited only at the end. If you use the method outlined above, which relies on uninitialized memory containing the desired content, on a compositing window system, the results will vary wildly: from a solid clear color, through wildly distorted junk fragments, to data noise.
Since taking a screenshot reliably must take into account a lot of idiosyncrasies of the system on which it is to happen, it's virtually impossible to write a truly portable screenshot program.
And OpenGL is definitely the wrong tool for it, no matter that people (including myself) were able to abuse it for that purpose in the past.
I programmed this C code to capture the screen on Macs and show it in an OpenGL window via glDrawPixels:
opengl-capture.c
http://pastebin.com/pMH2rDNH
Coding the FFI for Haskell is quite trivial. I'll do it soon.
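A minimal sketch of such a binding, assuming the C side exports something shaped like `unsigned char *capture_screen(int *width, int *height)` (a hypothetical signature; the actual code behind the pastebin link may differ):

{-# LANGUAGE ForeignFunctionInterface #-}
module ScreenCapture (captureScreen) where

import Foreign
import Foreign.C.Types

-- Hypothetical C entry point: unsigned char *capture_screen(int *w, int *h)
foreign import ccall "capture_screen"
  c_captureScreen :: Ptr CInt -> Ptr CInt -> IO (Ptr Word8)

-- Returns the captured width, height, and a pointer to the raw pixel data.
captureScreen :: IO (Int, Int, Ptr Word8)
captureScreen =
  alloca $ \wPtr ->
  alloca $ \hPtr -> do
    pixels <- c_captureScreen wPtr hPtr
    w <- peek wPtr
    h <- peek hPtr
    return (fromIntegral w, fromIntegral h, pixels)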
This might be useful to find the solution in C:
NeHe Productions - Using gluUnProject
http://nehe.gamedev.net/article/using_gluunproject/16013/
Apple Mailing Lists - Re: Screen snapshot example code posted
http://lists.apple.com/archives/cocoa-dev/2005/Aug/msg00901.html
Compiling OpenGL programs on Windows, Linux and OS X
http://goanna.cs.rmit.edu.au/~gl/teaching/Interactive3D/2012/compiling.html
Grab Mac OS Screen using GL_RGB format
