Stroke width, or line material in three-globe - three.js

Just trying to up the stroke width a little on the country polygons for three-globe.
There doesn't appear to be a helper function for this material or any settings beyond color.
I had the bright idea of looping through all the children of the globe object. Very crude, but:
for (const child of Globe.children[0].children[4].children) {
  child.children[1].material.linewidth = 3;
  child.children[1].material.color = new THREE.Color('rgba(255,255,255,1)');
}
This appears to have no effect on the line width. It does, however, successfully change the color, so I think I'm close, though I really hope there's a better way than this.

I'm sorry to inform you that the .linewidth property is very poorly supported due to OpenGL limitations. You can see an explanation in the LineBasicMaterial.linewidth documentation:
Due to limitations of the OpenGL Core Profile with the WebGL renderer on most platforms linewidth will always be 1 regardless of the set value.
You'll run into this issue if you're using THREE.Line or THREE.LineSegments. However, there is an alternative: THREE.Line2, which circumvents the limitation by drawing lots of instanced gl.TRIANGLES instead of gl.LINES. You can see it in action in this example. In fact, there are 3 demos of fat lines, each one with a slightly different implementation. You would then have to substitute the country outlines with your own fat lines.
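For reference, here is a minimal sketch of the fat-lines approach, assuming the addon modules from the three.js examples/jsm/lines directory and an existing scene; the coordinates are illustrative:

import { Line2 } from 'three/examples/jsm/lines/Line2.js';
import { LineMaterial } from 'three/examples/jsm/lines/LineMaterial.js';
import { LineGeometry } from 'three/examples/jsm/lines/LineGeometry.js';

const geometry = new LineGeometry();
geometry.setPositions([0, 0, 0, 1, 1, 0, 2, 0, 0]); // flat [x, y, z, x, y, z, ...] array

const material = new LineMaterial({
  color: 0xffffff,
  linewidth: 3, // in pixels; honored because the line is built from triangles
});
material.resolution.set(window.innerWidth, window.innerHeight); // LineMaterial needs this

scene.add(new Line2(geometry, material));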

Related

Problem when assigning Input.mousePosition to transform.position in Unity2D

I have been following the Inventory tutorials for Unity by Kryzarel and have encountered a weird issue that I think may be from something unrelated.
Tons of googling has yielded no results. It seems like an obscure issue.
https://www.youtube.com/channel/UCOM0GGMEcu-gyf4F1mT7A8Q/videos for reference of the channel.
But the issue I'm running into is with the following:
draggableItem.transform.position = Input.mousePosition;
So basically, draggableItem is a reference to an Image component on a GameObject. I log Input.mousePosition beforehand and the values make sense (within the hundreds, e.g. (563, 262, 0)). However, the transform position is nowhere near the number logged: for that example, I'm seeing (48660.31, 23917.95, -7889.887). There is no logic between the Debug.Log statement printing Input.mousePosition and the code assigning it to the transform. Does anyone have any idea what I could have configured wrong, or what else could be wrong?
I would expect the position to be (563, 262, 0), not the ridiculous number it ends up being. I've tried localPosition instead of transform.position, and it sort of works, in that it's off by about 500 to 700 to the top-right of what I'm moving relative to the mouse. I want to avoid hacky solutions like subtracting some magic number if possible.
Edit: Some further background: other mouse clicks and mouse-related things appear to work correctly. It's an orthographic camera, the default for a Unity 2D project.
Solution: In my case I was able to set it per the accepted answer; I then had to modify position, not localPosition, and also had to zero out the z-value of the world point.
The mouse position is relative to your screen, not your world. You need to convert the screen space to world space with:
var pos = Camera.main.ScreenToWorldPoint(Input.mousePosition);
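Putting that together with the asker's solution note (assign to position and zero out the z-value), a minimal sketch; draggableItem is the asker's field, and Camera.main is assumed to be the rendering camera:

var pos = Camera.main.ScreenToWorldPoint(Input.mousePosition);
pos.z = 0f; // the world point inherits the camera's depth; zero it for a 2D scene
draggableItem.transform.position = pos;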

WebGLRenderTarget image aliased

After sorting out the issues in this question, I was finally getting image data from my render target. But that image data does not seem to use anti-aliasing. (It also doesn't seem to have any alpha values where 0 < a < 255, but that may be a different issue.)
I saw in this thread that anti-aliasing isn't available for render targets, but that was in 2011. Is that still the case? Do I need to employ post-process anti-aliasing if I want it for my render target?
This issue is present in both r76 (what I'm using from my previous question) and even the latest, r86.
Here's an example image, if it helps. The gray background is the image rendered to the main canvas, while the transparent background comes from the render target. You can really see the aliasing on the edges between the faces.
This answer is three years late, but the solution is in this discussion: https://discourse.threejs.org/t/why-is-a-custom-fbo-not-antialiased/1329
With the next release of three.js (R101), it’s possible to use a new type of render-target to solve this problem. WebGLMultisampleRenderTarget enables the support of multisampled renderbuffers. You can now perform “render-to-texture” and have an antialiasing render result. A post-process AA like FXAA is not necessary anymore.
Important: It’s required to use a WebGL 2 rendering context, since multisampled renderbuffers are a WebGL 2 feature.
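A minimal sketch of that approach, assuming a three.js build that still ships WebGLMultisampleRenderTarget (recent releases folded this into WebGLRenderTarget via a samples option) and an existing scene, camera, width and height:

const canvas = document.createElement('canvas');
const context = canvas.getContext('webgl2'); // WebGL 2 is required for multisampling
const renderer = new THREE.WebGLRenderer({ canvas, context });

const target = new THREE.WebGLMultisampleRenderTarget(width, height);
renderer.setRenderTarget(target);
renderer.render(scene, camera); // render-to-texture, now antialiased
renderer.setRenderTarget(null); // back to the default framebuffer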

Single OpenGL context, multiple views

I have a Windows app which can create several view windows, each of which can render some models using OpenGL (3.2+). Each window can either render its own independent object, or two (or more) windows can render the same object (but, for example, from different camera perspectives).
After reading various posts here on stackoverflow I decided to create a single OpenGL context (HGLRC), and for each window that I am rendering to (HDC) I switch with
wglMakeCurrent(targetWindowHDC, m_deviceContext)
As you can see in the screenshot, that seems to work fine in principle (the window handling happens on the main thread, while all OpenGL operations are confined to my own RenderThread). For each of the windows I render to an FBO (with MSAA support if the user activates it), which only gets updated when something in the scene changes; otherwise it is just drawn to the window as is.
My question is now, what states do I have to set every time I switch to drawing to another window? And is my approach reasonable in terms of performance?
This is what I now set every time after I make the context current for another HDC:
glClearDepth( 1.0f );                            // clear values
glClearColor( color.r, color.g, color.b, 1.0f );
glEnable( GL_DEPTH_TEST );                       // depth state
glDepthMask( GL_TRUE );
glDepthFunc( GL_LEQUAL );
glDepthRange( 0.0f, 1.0f );
glPointSize( 3.0f );
glEnable( GL_BLEND );                            // blending
glBlendFunc( srcBlend, dstBlend );
glPolygonMode( GL_FRONT_AND_BACK, targetType );  // rasterization
glEnable( GL_CULL_FACE );                        // culling
glCullFace( GL_BACK );
glViewport( 0, 0, vp.width, vp.height );         // window-sized viewport
These are basically all the settings that could be changed when the user sets up the render windows, so I need to be sure they are set correctly before rendering each window.
But is it really necessary to do all those calls? It means in the above example with 4 render windows I need to call those 4 times each frame. Is there a better way? Would it be more efficient with several GL contexts?
The absolute minimum set of state you need to track between windows is the viewport, clearing color, color and depth masks, depth test function and depth range; you've got those covered in your code snippet already, so you're good.
Most other OpenGL state should be set on-demand right before it's needed anyway (and also cleaned up when no longer needed). So I'd say setting blend modes, face culling and so on actually is superfluous in your snippet.
Using the same context for multiple windows makes sense if the kind of rendering is the same for all the windows. For example in a typical 3D modeler there's a "quad view". If those subviews are implemented using multiple windows, then reusing a single context makes sense.
I'm one of those guys who keeps reminding people that there's no need to have a separate OpenGL context for each window. That doesn't mean doing this is a bad thing if it makes your life simpler.
If your concern is about multiple windows with largely different rendering settings, then using separate render contexts is sensible.
So how do you decide whether to use multiple contexts or a single one? Well, that's easy:
If the windows are sharing much of the render code and conceptually show the same thing (the same scene from different vantage points, different objects that make use of the same texture and are rendered using the same code) then context reuse it is.
If the contents of the windows differ a lot, then multiple contexts.
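For the single-context case, the per-frame structure boils down to something like this minimal Win32/WGL sketch (RenderWindow and the elided loop-body details are illustrative):

#include <windows.h>
#include <GL/gl.h>
#include <vector>

struct RenderWindow { HDC hdc; /* plus camera, FBO, per-window settings ... */ };

void renderAll(const std::vector<RenderWindow>& windows, HGLRC sharedContext)
{
    for (const RenderWindow& w : windows) {
        wglMakeCurrent(w.hdc, sharedContext); // bind the one shared context to this window
        // apply the per-window state from the snippet above (viewport, clear color, ...)
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        // ... draw this window's scene ...
        SwapBuffers(w.hdc); // present this window's back buffer
    }
}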

camera.lookAt not called when THREE controls are being used

I am working on a program that uses THREE.RollControls. When the user goes too far away from the center of the scene, they tend to get lost, so I am working on a function that reorients them, facing the center of the scene.
What I had intended to do was simply call the following:
camera.lookAt(scene.position)
However, this has no effect. From what I was reading in different Stack Overflow questions, specifically this one:
ThreeJS camera.lookAt() has no effect, is there something I'm doing wrong?
It seems like their solution was to change the camera position using the controls, rather than changing the camera itself.
I do not believe there is any 'target' in the RollControls, so I don't know how I can reset where the camera is looking based on a THREE.Vector3(). Is there a simple way to do this, or will I basically have to do it manually?
So far I have 'attempted' the following:
- Calculate the difference of position of the camera with the position of the scene.
- Normalize this vector
- Subtract it from the direction forward of the camera
- use this vector in controls.forward.add(thisVector)
but this doesn't do what I want at all (probably because I have no idea what I'm doing).
Thank you in advance for your time!
Isaac
The same thing bugged me about the RollControls too, but I took a different approach to solving the problem. Since the controls live in the example code (in r55) and are not part of the core library, you can modify them. You can see my modifications at http://www.virtuality.gr/AGG/EaZD-WebGL/js/three.js/examples/js/controls/RollControls.js
I introduced a local variable called mouseLook because I could not use this.mouseLook. I initialized it to false and only set it to true while a button is pressed, i.e. when navigating the scene. That solved my problem.
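A minimal sketch of the kind of change described, assuming the r55 RollControls event handlers; the handler names follow the example file, and everything elided stays as in the original:

var mouseLook = false; // local flag, used instead of this.mouseLook

function onMouseDown(event) {
  mouseLook = true; // only look around while a mouse button is held
  // ... original button handling ...
}

function onMouseUp(event) {
  mouseLook = false;
  // ... original button handling ...
}

function onMouseMove(event) {
  if (!mouseLook) return; // ignore mouse movement unless navigating
  // ... original update of the look direction ...
}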

Drawing an image with a color tint in MonoTouch

What do I need to read up on to tint the image I draw with a predefined color? Or, alternatively, to just adjust the alpha value? I've played around with BlendMode but I basically don't know what I'm doing. :)
Simplified code below:
_backgroundImage = UIImage.FromFile ("whiteblock.png");
var ctx = UIGraphics.GetCurrentContext ();
// What do I do here to tint or adjust alpha of the image
ctx.DrawImage (rect, _backgroundImage.CGImage);
Thanks
AnkMannen
I recommend starting with the Core Image Filter reference docs (see also CIFilter).
(Note that depending on the version of the OS, not all filters may be available.)
You probably want to focus on the filters in the CICategoryColorAdjustment category.
In particular, the CITemperatureAndTint filter can adjust the tint of the image, as you ask. But it is not straightforward to use.* There are other questions on Stack Overflow describing it, like this one: Input parameters of CITemperatureAndTint (CIFilter)
Finally, check out the MonoTouch docs for a code example with CITemperatureAndTint. (I believe the images attached to the examples given there are wrong, as they show a scaled image, not a tinted one.)
*It takes two parameters, each a 2D CIVector. I believe the first component of each vector is the color temperature, in the ballpark of (1k ... 30k) kelvins, and the second is the tint offset. If someone knows better, correct me.
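A minimal MonoTouch sketch of the CITemperatureAndTint route, assuming the strongly typed binding in MonoTouch.CoreImage; the neutral/target values are illustrative:

using MonoTouch.CoreImage;
using MonoTouch.UIKit;

var uiImage = UIImage.FromFile ("whiteblock.png");
var filter = new CITemperatureAndTint () {
    Image = new CIImage (uiImage),
    Neutral = new CIVector (6500, 0),       // current white point: [temperature, tint]
    TargetNeutral = new CIVector (4000, 0), // desired white point (warmer here)
};
var output = filter.OutputImage;
var context = CIContext.FromOptions (null);
var tinted = UIImage.FromImage (context.CreateCGImage (output, output.Extent));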
