I have an image that is generated as an NSImage, and then I draw it into my NSView subclass, using a custom draw() method.
I want to modify this custom view so that the image is drawn in the same place, but it fades out on the side. That is, it's drawn as a linear gradient, from alpha=1.0 to alpha=0.0.
My best guess is one of the draw() variants with NSCompositingOperation might help me do what I want, but I'm having trouble understanding how they could do this. I'm not a graphics expert, and the NSImage and NSCompositingOperation docs seem to be using different terminology.
The quick version: pretty much this question but on macOS instead of Android.
You're on the right track. You'll want to use NSCompositingOperationDestinationOut to achieve this. That will effectively punch out the destination based on the alpha of the source being drawn. So if you first draw your image and then draw a gradient from alpha 0.0 to 1.0 with the .destinationOut operation on top, you'll end up with your image fading from alpha 1.0 to 0.0.
Because that punch-out affects whatever is already in the backing store where the gradient is drawn, you'll want to be careful where and how you use it.
If you want to do this all within the drawing of the view, do the image drawing and the punch-out gradient inside a transparency layer, using CGContextBeginTransparencyLayer and CGContextEndTransparencyLayer, so you don't punch out anything else.
You could also first create a new NSImage representing this faded variant, using either the drawingHandler constructor or an NSCustomImageRep, and do the same image-and-gradient drawing in there (without worrying about transparency layers). Then draw that image into your view, or simply use it with an NSImageView.
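The same Porter-Duff operator that AppKit exposes as NSCompositingOperationDestinationOut exists in other 2D APIs, so purely as an illustration of the compositing logic (not AppKit code), here is a minimal Canvas 2D sketch in TypeScript; `img`, `w`, and `h` are placeholder names:

```typescript
// Draw `img` into `ctx`, fading it out to the right with a destination-out
// gradient, the same Porter-Duff operator AppKit exposes as
// NSCompositingOperationDestinationOut.
function drawFadedImage(
  ctx: CanvasRenderingContext2D,
  img: CanvasImageSource,
  w: number,
  h: number
): void {
  // Step 1: draw the source image normally.
  ctx.drawImage(img, 0, 0, w, h);

  // Step 2: a gradient from alpha 0.0 (left) to alpha 1.0 (right)...
  const grad = ctx.createLinearGradient(0, 0, w, 0);
  grad.addColorStop(0, "rgba(0, 0, 0, 0)"); // keep the image here
  grad.addColorStop(1, "rgba(0, 0, 0, 1)"); // fully punch it out here

  // ...drawn with destination-out, which erases the destination wherever
  // the gradient is opaque, leaving the image at alpha 1.0 fading to 0.0.
  ctx.globalCompositeOperation = "destination-out";
  ctx.fillStyle = grad;
  ctx.fillRect(0, 0, w, h);
  ctx.globalCompositeOperation = "source-over"; // restore the default
}
```

Doing this in an offscreen canvas first is the moral equivalent of the transparency-layer / separate-NSImage advice above: the punch-out then can't affect anything else already drawn.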
I need to add this classic effect which consists of highlighting a 3D model by stroking its outline, just like this for example (without the transparent gradient, just a solid stroke):
I found a way to do this here which seems pretty simple and easy to implement. The author plays with the stencil buffer to compute the model's shape, then draws the model in wireframe, and the thickness of the lines does the job.
This is my problem: the wireframes. I'm using OpenGL ES 2.0, which means I can't use glPolygonMode to change the render mode to GL_LINE.
And I'm stuck here; I can't find any simple alternative. The most relevant solution I've found so far is to implement the wireframe rendering myself, which is clearly not the easiest option. To draw my objects I'm using glDrawElements with GL_TRIANGLES as the primitive; I tried GL_TRIANGLE_STRIP as the primitive, but the result is definitely not the right one.
Any idea/trick to bypass the lack of glPolygonMode with OpenGL ES? Thanks in advance.
Drawing an outline or border for a model in OpenGL ES 2.0 is not as straightforward as the example you mentioned.
Method 1: The easiest way is to do it in multiple passes.
Step 1 (Shape Pass): Render only the object, drawn in solid black with the same camera settings, and fill all other pixels with a different color.
Step 2 (Render Pass): This is the usual render pass, where you draw the objects in their real colors. For every fragment, test the color at the same pixel in the Shape Pass image to see whether any of the 8 neighboring pixels differ in color. If all nearby pixels are the same color, the fragment is not on a border; otherwise, add some color to draw the border.
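Here is a minimal sketch of the Step 2 border test as a fragment shader, written for WebGL 1 in TypeScript since it uses the same GLSL ES 1.00 shading language as OpenGL ES 2.0; `uShapeTex`, `uTexelSize`, and `vUV` are assumed names, and the highlight color is arbitrary:

```typescript
// Fragment shader for the border test: compare each fragment's Shape Pass
// sample against its 8 neighbours; any mismatch marks a border pixel.
const borderFrag = `
precision mediump float;
uniform sampler2D uShapeTex; // Shape Pass image: black object, lighter background
uniform vec2 uTexelSize;     // (1.0 / width, 1.0 / height) of that image
varying vec2 vUV;            // passed through from the vertex shader

void main() {
  float center = texture2D(uShapeTex, vUV).r;
  float border = 0.0;
  // Constant loop bounds, as GLSL ES 1.00 requires.
  for (int dx = -1; dx <= 1; dx++) {
    for (int dy = -1; dy <= 1; dy++) {
      vec2 offset = vec2(float(dx), float(dy)) * uTexelSize;
      if (abs(texture2D(uShapeTex, vUV + offset).r - center) > 0.5) {
        border = 1.0;
      }
    }
  }
  // Border pixels get the highlight colour; everything else stays
  // transparent so the result can be composited over the Render Pass.
  gl_FragColor = mix(vec4(0.0), vec4(1.0, 0.6, 0.0, 1.0), border);
}`;
```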
Method 2: There are other techniques that can give you a similar effect in a single pass. Draw the same object twice: first slightly scaled up in a single solid color, then at normal scale with its real colors.
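A minimal sketch of Method 2 in TypeScript against WebGL 1, which shares its API with OpenGL ES 2.0. `outlineProgram` is assumed to scale vertices up slightly in its vertex shader, and `drawModel` wraps your existing glDrawElements call; the front-face culling is a common refinement so the enlarged copy only shows where it sticks out past the model:

```typescript
// Two-draw outline ("inverted hull"): pass 1 draws an enlarged copy in a
// flat colour, pass 2 draws the real model on top of it.
function drawWithOutline(
  gl: WebGLRenderingContext,
  outlineProgram: WebGLProgram, // vertex shader scales positions up by ~3%
  mainProgram: WebGLProgram,    // your normal shading program
  drawModel: (program: WebGLProgram) => void // binds buffers/uniforms and
                                             // calls gl.drawElements(gl.TRIANGLES, ...)
): void {
  gl.enable(gl.CULL_FACE);

  // Pass 1: enlarged silhouette in the outline colour. Culling front faces
  // keeps the copy visible only where it sticks out past the model.
  gl.cullFace(gl.FRONT);
  drawModel(outlineProgram);

  // Pass 2: the model itself, at normal scale and with real colours.
  gl.cullFace(gl.BACK);
  drawModel(mainProgram);
}
```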
I am visualizing a graph using Three.js, and for each node of the graph I add a label using TextGeometry. It is a pretty small graph, but when I add text, my application gets really slow. What should I do about it?
TextGeometry is more suitable for cases where you are really interested in rendering text in 3D. It creates complex geometry that will surely slow your app down, especially when there is a lot of text or you use CanvasRenderer.
For labels, it is generally better to use 2D labels, which are much faster to render. There are several approaches: labels can go on top of the Three.js rendering canvas, on a separate canvas, or even be normal HTML nodes positioned using CSS properties. Alternatively, you can dynamically render each label's text into a small canvas and use it as a sprite texture that always faces the camera; this is probably the easiest way, since the labels stay part of the 3D scene like your other objects. For a separate-layer approach, you need unprojectVector or similar to compute screen XY coordinates matching your 3D scene positions.
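For instance, a minimal sketch of the canvas-sprite approach, assuming a reasonably recent three.js (older versions would use THREE.Texture with needsUpdate = true instead of THREE.CanvasTexture); the sizes and font are arbitrary:

```typescript
import * as THREE from "three";

// Render the label text once into a small canvas, then show it as an
// always-camera-facing sprite in the scene.
function makeLabel(text: string): THREE.Sprite {
  const canvas = document.createElement("canvas");
  canvas.width = 256;
  canvas.height = 64;
  const ctx = canvas.getContext("2d")!;
  ctx.font = "32px sans-serif";
  ctx.fillStyle = "white";
  ctx.textBaseline = "middle";
  ctx.fillText(text, 8, canvas.height / 2);

  const texture = new THREE.CanvasTexture(canvas);
  const material = new THREE.SpriteMaterial({ map: texture, transparent: true });
  const sprite = new THREE.Sprite(material);
  sprite.scale.set(4, 1, 1); // world-space size; tune to your scene
  return sprite;
}

// Usage: position the sprite at (or just above) the node it labels:
// const label = makeLabel("node-42");
// label.position.copy(node.position);
// scene.add(label);
```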
See these SO posts for example:
- Dynamically create 2D text in three.js
- Canvas and SpriteMaterial
- How do I add a tag/label to appear on top of several objects so that the tag always faces the camera when the user clicks the object?
I have started learning canvas. After I started drawing some basic shapes, I wanted to make some modifications to them. For example, I am confused about how to modify the length and width of a rectangle. Do I have to clear the canvas and redraw, or can I capture the rectangle as an object, like objects in JavaScript?
The canvas is a raster graphics surface, so there is no rectangle object to capture; modifying the length and width of a rectangle is a vector operation. To change a shape, you clear the canvas (or at least the affected region) and redraw it, as sketched below. Scaling the raster instead is possible, but quality loss can and will occur. If you want real vector graphics you can use SVG, and if it is only a rectangle, a div with a border overlaid on your canvas will do.
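A minimal sketch of the clear-and-redraw approach in TypeScript; the rectangle's state lives in your own object, since the canvas itself keeps no shape objects:

```typescript
// The canvas retains only pixels, so keep the rectangle's geometry yourself
// and repaint whenever it changes.
interface Rect { x: number; y: number; w: number; h: number; }

function drawRect(ctx: CanvasRenderingContext2D, rect: Rect): void {
  ctx.clearRect(0, 0, ctx.canvas.width, ctx.canvas.height); // wipe the frame
  ctx.fillRect(rect.x, rect.y, rect.w, rect.h);             // repaint
}

// "Resizing" the rectangle is just mutating your state and redrawing:
// const rect: Rect = { x: 10, y: 10, w: 100, h: 50 };
// rect.w = 150;
// drawRect(ctx, rect);
```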
WebGL doesn't support line thickness. So when I need to highlight a line, I just draw a rectangle around it. But when I zoom the scene, it looks pretty scary.
There are two ways I see right now:
1) Recalculate the rectangle's width from canvas.width into model coordinates.
2) Place all zoom-invariant objects under a separate matrix (I use SceneJS) and recalculate their positions after each mouse-wheel event.
I don't like either of these solutions, so I wonder: is there a good workaround to make items zoom-invariant?
Another way around that (not the most efficient one, though) might be to use shaders. In our WebGL app, we render the highlighted primitives into a texture and then blur it back onto the screen to add a "selection glow" effect.
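A minimal sketch of the blur half of that idea, assuming the highlighted primitives have already been rendered into an offscreen texture (framebuffer setup omitted); the shader and uniform names are placeholders:

```typescript
// GLSL ES 1.00 fragment shader: box-blur the offscreen highlight texture and
// tint it, producing the glow that gets composited back over the scene.
const glowFrag = `
precision mediump float;
uniform sampler2D uHighlightTex; // offscreen pass containing only the highlights
uniform vec2 uTexelSize;         // (1.0 / width, 1.0 / height)
varying vec2 vUV;

void main() {
  vec4 sum = vec4(0.0);
  // 3x3 box blur; a separable Gaussian is cheaper at larger radii.
  for (int dx = -1; dx <= 1; dx++) {
    for (int dy = -1; dy <= 1; dy++) {
      sum += texture2D(uHighlightTex, vUV + vec2(float(dx), float(dy)) * uTexelSize);
    }
  }
  gl_FragColor = (sum / 9.0) * vec4(1.0, 0.8, 0.2, 1.0); // tinted glow
}`;

// Composite the blurred glow over the scene additively:
// gl.enable(gl.BLEND);
// gl.blendFunc(gl.ONE, gl.ONE);
```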