Three.js transform controls: lines too thin

I am trying to make the transform controls gizmo lines thicker, because they are difficult to grab with a linewidth of 1. However, changing this.linewidth = 1 does nothing. According to https://threejs.org/docs/#api/materials/LineBasicMaterial, the linewidth parameter does not work on Windows. Any help?

Regarding line thickness:
Unless your browser allows you to opt in to drawing with native OpenGL, browsers on Windows render WebGL through an abstraction layer called ANGLE, which translates GL calls to Direct3D. Because Direct3D doesn't support line widths, neither does ANGLE, and so neither does three.js (again, on Windows in non-native GL mode). Here's a discussion of the root topic; unfortunately, the accepted solution there is to use a geometry shader, which is not supported by WebGL.
Solutions:
You've already discovered one: use cylinders.
There are clever ways of creating quads from your line definitions and using shaders to manipulate them into appearing to be lines, but discussing that approach is currently out of my depth.
Finally, you can upgrade your version of three.js to r91+. "Fat" line support was added in r91, and you can find an example here.

After a while it was not so difficult to change the lines into cylinders. Below is the updated TransformControls.js:
https://www.dropbox.com/s/y52ra0antdvo0ht/TransformControls.js?dl=0

Your answer is a good workaround, but the underlying reason is that WebGL doesn't really support different line widths. From the MDN web docs:
The WebGL spec, based on the OpenGL ES 2.0/3.0 specs, points out that the minimum and maximum width for a line is implementation defined. The maximum minimum width is allowed to be 1.0. The minimum maximum width is also allowed to be 1.0. Because of these implementation defined limits it is not recommended to use line widths other than 1.0 since there is no guarantee any user's browser will display any other width.
As of January 2017 most implementations of WebGL only support a minimum of 1 and a maximum of 1 as the technology they are based on has these same limits.
Source
This is why you don't see a difference, and why creating "fat" lines in WebGL requires building actual thicker geometry.
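
You can confirm the limits on your own machine by querying the implementation-defined range (in a WebGL context the equivalent query is gl.getParameter(gl.ALIASED_LINE_WIDTH_RANGE)). A minimal sketch against the GL C API, assuming a current GL context; the header path varies by platform:

    #include <cstdio>
    #include <GL/gl.h>  // desktop GL; use <GLES2/gl2.h> on an OpenGL ES 2.0 stack

    // Print the implementation-defined line width range. On ANGLE (the
    // default for browsers on Windows) both values are typically 1.0,
    // which is why setting linewidth > 1 has no visible effect.
    void printLineWidthRange() {
        GLfloat range[2] = { 0.0f, 0.0f };
        glGetFloatv(GL_ALIASED_LINE_WIDTH_RANGE, range);
        std::printf("line width range: %.1f .. %.1f\n", range[0], range[1]);
    }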

Related

Threejs edge helper

I'm new to three.js and I have been using the EdgesHelper, which achieves the look I want for now. But I have two questions:
What is the default edge width, and how is it calculated?
Can the edge width be changed?
I have searched around and I'm pretty sure that, due to some limitation (not of three.js but of Windows, I believe), there is no simple method to change the thickness of the edges. A lot of the examples I found with thicker edges only work on a particular geometry (i.e. the approach doesn't seem universal).
Perhaps I am wrong, but I would have thought this would be a very common requirement. Rather than spend hours/days/weeks trying to get what I want myself (which I'm not even sure I could do), does anyone know of a way to control the edge thickness: an existing example, or a library that works on any shape (any imported OBJ, for example)?
Many thanks
Coming back to this: as Wilt mentioned, there are other threads on this. Basically, you cannot change the thickness due to a limitation in ANGLE. There are some workarounds, like THREE.MeshLine (also mentioned in the link Wilt posted), but I found most workarounds had limitations for what I wanted.
https://mattdesl.svbtle.com/drawing-lines-is-hard explains what is difficult about drawing lines.
He also has a library, https://github.com/mattdesl/three-line-2d, which should make thick lines easier to use.

I don't fully understand D2D1_FIGURE_BEGIN: why is it needed, what's the difference, and why does Microsoft's sample code mismatch types anyway?

I'm reading up on Direct2D before I migrate my GDI code to it, and I'm trying to figure out how paths work. I understand most of the work involved with geometries and geometry sinks, but there's one thing I don't understand: the D2D1_FIGURE_BEGIN type and its parameter to BeginFigure().
First, why is this value even needed? Why does a geometry need to know ahead of time whether it's filled or hollow? I don't know of any other drawing API that cares whether path objects are filled or not ahead of time; you just define the endpoints of the shapes and then call fill() or stroke() to draw your path, so how are geometries any different?
And if this parameter is necessary, how does choosing one value over the other affect the shapes I draw?
Finally, if I understand the usage of this enumeration correctly, you're supposed to use filled paths only with FillGeometry() and hollow paths only with DrawGeometry(). However, the hourglass example here, which is cited by several method documentation pages (like the one for BeginFigure()), creates a filled figure and draws it with both DrawGeometry() and FillGeometry()! Is this undefined behavior? Does it have anything to do with the blue border around the gradient in the example picture, which I don't see anywhere in the code?
Thanks.
EDIT: Okay, I think I understand what's going on with the gradient's weird outline: the gradient is also transitioning alpha values, and the fill overlaps the stroke because the stroke is centered on the line and the fill is drawn after the stroke. That still doesn't explain why I can both fill and stroke a filled geometry, or what the difference between hollow and filled geometries is...
Also, I just realized that hollow geometries are documented as not having bounds. Does this mean that hollow geometries are purely an optimization for stroke-only geometries and otherwise behave identically to filled geometries?
If you want to better understand Direct2D's geometry system, I recommend studying the WPF geometry system. WPF, XPS, Direct2D, Silverlight, and the newer "XAML" frameworks all use the same building blocks (the same "language", if you will). I found it easier to understand the declarative, object-oriented API in WPF, and after that it was a breeze to work with the imperative API in Direct2D.

You can think of WPF's mutable geometry system as an implementation of the "builder" pattern from Java, where the build() method is behind the scenes (hidden from you) and spits out an immutable Direct2D geometry when it comes time to render things on-screen. (WPF uses something called "MIL", which, IIRC/AFAICT, Direct2D was forked from. They really are the same thing!)

It is also straightforward to write code that converts between the two representations, e.g. walking a WPF PathGeometry and streaming it into a Direct2D geometry sink, and you can also use ID2D1PathGeometry::Stream and a custom ID2D1GeometrySink implementation to reconstitute a WPF PathGeometry.
(BTW this is not theoretical :) It's exactly what I do in Paint.NET 4.0+: I use a WPF-esque declarative, mutable object model that spits out immutable Direct2D geometries at render time. It works really well!)
Okay, anyway, to get directly to your specific question: BeginFigure() and D2D1_FIGURE_BEGIN map directly to the PathFigure.IsFilled property in WPF. In order to get an intuitive understanding of what effect this has, you can use something like KaXAML to play around with some geometries from WPF or Silverlight samples and see what the results look like. And the documentation is definitely better for WPF and Silverlight than for Direct2D.
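
To see the effect in code, here is a minimal sketch (my own example, not from the samples; it assumes a valid factory, render target, and brush, and omits error handling) that builds a triangle with D2D1_FIGURE_BEGIN_FILLED and then both fills and strokes it:

    #include <d2d1.h>
    #include <d2d1helper.h>
    #include <wrl/client.h>
    using Microsoft::WRL::ComPtr;

    // Build a closed triangle. With D2D1_FIGURE_BEGIN_FILLED the interior
    // belongs to the geometry; with D2D1_FIGURE_BEGIN_HOLLOW, FillGeometry
    // would paint nothing, while DrawGeometry would still stroke the outline.
    void drawTriangle(ID2D1Factory* factory, ID2D1RenderTarget* rt, ID2D1Brush* brush)
    {
        ComPtr<ID2D1PathGeometry> geometry;
        factory->CreatePathGeometry(&geometry);

        ComPtr<ID2D1GeometrySink> sink;
        geometry->Open(&sink);
        sink->BeginFigure(D2D1::Point2F(10, 10), D2D1_FIGURE_BEGIN_FILLED);
        sink->AddLine(D2D1::Point2F(100, 10));
        sink->AddLine(D2D1::Point2F(55, 90));
        sink->EndFigure(D2D1_FIGURE_END_CLOSED);
        sink->Close();

        rt->FillGeometry(geometry.Get(), brush);        // paints the interior
        rt->DrawGeometry(geometry.Get(), brush, 2.0f);  // strokes the outline
    }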
Another key concept is that DrawGeometry is basically a helper method. You can accomplish the same thing by first widening your geometry with ID2D1Geometry::Widen and then using FillGeometry. ("Widening" seems like a misnomer to me, by the way: in Photoshop or Illustrator you'd probably use a verb like "stroke".) That's not to say that either one always performs better or worse; be sure to benchmark, as I've seen it go both ways. The reason you can think of this as a helper method is that the lowest level of the rasterization engine can only do one thing: fill a triangle. All other drawing "primitives" must be converted to triangle lists or strips (this is also why ID2D1Mesh is so fast: it bypasses all sorts of processing code!). Filling a geometry requires tessellating its interior into a list of triangle strips, which can then be filled by Direct3D. "Drawing" a geometry requires applying a stroke (width and/or style): even a simple 1-pixel-wide straight line must first be converted to 2 filled triangles.
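
A sketch of that equivalence (again my own illustration, same assumptions as above; the function name widenedDraw is mine):

    #include <d2d1.h>
    #include <d2d1helper.h>
    #include <wrl/client.h>
    using Microsoft::WRL::ComPtr;

    // Roughly what rt->DrawGeometry(src, brush, width) amounts to:
    // widen the source geometry into a new fillable path, then fill it.
    void widenedDraw(ID2D1Factory* factory, ID2D1RenderTarget* rt,
                     ID2D1Geometry* src, ID2D1Brush* brush, float width)
    {
        ComPtr<ID2D1PathGeometry> widened;
        factory->CreatePathGeometry(&widened);

        ComPtr<ID2D1GeometrySink> sink;
        widened->Open(&sink);
        src->Widen(width, nullptr, nullptr,
                   D2D1_DEFAULT_FLATTENING_TOLERANCE, sink.Get());
        sink->Close();

        rt->FillGeometry(widened.Get(), brush);
    }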
Oh, also: if you want to compute the "real" bounds of a geometry with hollow figures, use ID2D1Geometry::GetWidenedBounds with a strokeWidth of zero. This is a discrepancy between Direct2D and WPF that puzzles me: Geometry.Bounds (in WPF) is equivalent to ID2D1Geometry::GetWidenedBounds(0.0f).
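
In code, that is just (a fragment, assuming geometry is any valid ID2D1Geometry*):

    // Bounds that account for hollow figures: widen with a zero stroke width.
    D2D1_RECT_F bounds;
    geometry->GetWidenedBounds(0.0f, nullptr, nullptr,
                               D2D1_DEFAULT_FLATTENING_TOLERANCE, &bounds);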

SDL: Performance SPriG vs SDL_gfx

I need to draw a polygon with thick lines. I have two options:
Draw them with the SPriG library, which provides line thickness.
Split the polygon into its individual edges and draw each edge as a polygon with the desired thickness (as explained in the first tutorial on this page) using the SDL_gfx library.
I'm not sure about the performance of SPriG; I would guess SDL_gfx will be the fastest.
Have you ever tried this, or do you know the quality of SPriG?
Thanks
It looks like SPriG just draws a circle at each pixel along a line to give it thickness, so for wide lines you're looking at quite a bit of overdraw.
I'd do it a bit differently: expand each segment into a quad (two triangles) by offsetting its endpoints perpendicular to the line. It may or may not be faster, depending on how triangle rasterization compares to per-pixel circle overdraw.
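
A minimal sketch of that quad construction (plain C++; the Vec2 type and function name are mine, not from either library):

    #include <cmath>

    struct Vec2 { float x, y; };

    // Expand the segment (a, b) into a quad of the given thickness by
    // offsetting both endpoints along the unit normal. The four corners
    // can then be rasterized as two triangles, or handed to something
    // like SDL_gfx's filled-polygon routine.
    void segmentToQuad(Vec2 a, Vec2 b, float thickness, Vec2 out[4]) {
        float dx = b.x - a.x, dy = b.y - a.y;
        float len = std::sqrt(dx * dx + dy * dy);
        if (len == 0.0f) len = 1.0f;              // guard degenerate segments
        float nx = -dy / len * thickness * 0.5f;  // unit normal * half width
        float ny =  dx / len * thickness * 0.5f;
        out[0] = { a.x + nx, a.y + ny };
        out[1] = { b.x + nx, b.y + ny };
        out[2] = { b.x - nx, b.y - ny };
        out[3] = { a.x - nx, a.y - ny };
    }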
Don't use either of them. Just use OpenGL and call glLineWidth(3.6f);

Photoshop blending mode to OpenGL ES without shaders

I need to imitate Photoshop blending modes ("multiply", "screen" etc.) in my OpenGL ES 1.1 code (without shaders).
There are some docs on how to do this with HLSL and GLSL:
http://www.nathanm.com/photoshop-blending-math/ (archive)
http://mouaif.wordpress.com/2009/01/05/photoshop-math-with-glsl-shaders/
I need at least a working Screen mode.
Are there any fixed-pipeline implementations I could look at?
Most Photoshop blend modes are based on the Porter-Duff blend modes.
These require that all your images (textures, renderbuffers) are in premultiplied-alpha color space. This is usually done by multiplying all pixel values by the alpha value before storing them in a texture; for example, a fully transparent pixel becomes black after premultiplication. If you're unfamiliar with this color space, spend an hour or two reading about it on the web. It's a neat and useful concept, and it's required for Photoshop-like composition.
Anyway, once you have your images in that format, you can enable SCREEN using:
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_COLOR)
(Screen is defined as 1 - (1 - s)(1 - d) = s + d(1 - s), which is exactly what src*GL_ONE + dst*GL_ONE_MINUS_SRC_COLOR computes.)
The full MULTIPLY mode is not possible with the OpenGL ES fixed-function pipeline. If you only work with fully opaque pixels, you can fake it using:
glBlendFunc(GL_ZERO, GL_SRC_COLOR)
The results for transparent pixels, whether in your texture or in your framebuffer, will be wrong though.
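
If your assets are authored with straight (non-premultiplied) alpha, the conversion is a simple per-pixel pass before uploading the texture. A sketch in C++, assuming 8-bit RGBA data (the function name is mine):

    #include <cstddef>
    #include <cstdint>

    // Convert straight-alpha RGBA8 pixels to premultiplied alpha in place,
    // which is the format the blend functions above expect.
    void premultiplyAlpha(std::uint8_t* rgba, std::size_t pixelCount) {
        for (std::size_t i = 0; i < pixelCount; ++i) {
            std::uint8_t a = rgba[i * 4 + 3];
            for (int c = 0; c < 3; ++c)
                rgba[i * 4 + c] = static_cast<std::uint8_t>(rgba[i * 4 + c] * a / 255);
        }
    }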
You should try this:
glBlendFunc(GL_DST_COLOR, GL_ONE_MINUS_SRC_ALPHA)
This looks like multiply to me on the iPhone / OpenGL ES.
Your best place to start is to pick up a copy of the Red Book and read through the chapters on materials and blending modes. It has a very comprehensive and clear explanation of how the 'classic' OpenGL blending functions work.
I have found that using this:
glDepthFunc(GL_LEQUAL);
was all I needed to get a screen effect; at least it worked well in my project.
I am not sure why this works, but if someone knows, please share.

Issues with using unsupported Win32 GDI Pens modes?

The MSDN documentation is (somewhat) clear about the following two facts about GDI pens:
A cosmetic pen (created via CreatePen, or ExtCreatePen with PS_COSMETIC) must be 1 unit wide (well, <= 1, but let's not go there).
A geometric pen (ExtCreatePen with PS_GEOMETRIC) must be solid (PS_SOLID only; no PS_DASH, etc.). They can, however, draw fatter lines. This is clearly documented in the link above as only a 9x restriction (I'm dumb). In my defense, (bad) comments and (broken) logic in my code led me to believe otherwise, and some other googled articles must have been written considering only Windows 9x.
Why can I violate these rules and have GDI happily draw with these pens?
I can create fat (width = 10, for example) cosmetic pens and dashed geometric pens. Heck, I can create a fat, dashed geometric pen!
These pens usually seem to work fine. The only problem I've seen is in Polyline when I pass very large arrays of points: it renders the lines very slowly. However, Polyline acts strangely with large arrays in general; it just acts differently with the bad pens. (My other Polyline problems may be another question...)
Is it ever safe to use wide cosmetic pens, or wide geometric pens with patterns?
In general you should adhere to the documented API; otherwise you risk relying on OS-version-specific behaviour.
The ExtCreatePen restrictions you describe (e.g., no PS_DASH with PS_GEOMETRIC) only apply to Win9x, not WinNT, so on NT/2000/XP your "fat, dashed geometric pen" shouldn't be a problem. Also note that Polyline has some limitations on Win9x.
If you want dashed lines, I'd suggest using PS_USERSTYLE so that you control the lengths of the dashes and gaps, rather than relying on whatever defaults PS_DASH gives you.
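
For example, something like this (a sketch; the function name and the 12/6 pattern are mine):

    #include <windows.h>

    // A geometric pen with a custom dash pattern: 12 units on, 6 units off.
    // PS_USERSTYLE gives you control over the dash/gap lengths instead of
    // whatever the PS_DASH default happens to be.
    HPEN createDashedPen(DWORD width, COLORREF color) {
        LOGBRUSH lb = {};
        lb.lbStyle = BS_SOLID;
        lb.lbColor = color;
        const DWORD dashes[] = { 12, 6 };  // alternating dash and gap lengths
        return ExtCreatePen(PS_GEOMETRIC | PS_USERSTYLE | PS_ENDCAP_FLAT,
                            width, &lb, 2, dashes);
    }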
