I'm new to Three.js and I have been using the EdgesHelper, which achieves the look I want for now. But I have two questions...
What is the default edge width/how is it calculated?
Can the edge width be changed...
I have searched around and I'm pretty sure that, due to some limitation (not of Three.js but of Windows, I believe), there is no simple way to change the thickness of the edges (?). A lot of the examples I found with thicker edges only work on a particular geometry (i.e. they don't seem universal).
Perhaps I am wrong, but I would have thought this would be a very common requirement. Rather than spend hours/days/weeks trying to get what I want myself (which I'm not even sure I could do), does anyone know of a way to control the edge thickness, an existing example, or a library someone has already written that works on any shape (any imported OBJ, for example)?
Many thanks
Coming back to this: as Wilt mentioned, there are other threads on this. Basically you cannot change the thickness due to a limitation in ANGLE. There are some workarounds like THREE.MeshLine (also mentioned in the link Wilt posted), but I found most workarounds had some limitations for what I wanted.
https://mattdesl.svbtle.com/drawing-lines-is-hard explains what is difficult about drawing lines.
He also has a library called https://github.com/mattdesl/three-line-2d which should make lines easier to use.
I'm trying to create a tube along a curve, which TubeGeometry handles well. However, I also want the rings around each point to vary in radius, making the tube vary in width. I can't find any way to do this; the most I've found is this solution from almost a decade ago, but it involves editing the Three.js build itself, which I can't really do in this project... let alone the fact that it's for a much older version!
Is there any way to do this? With TubeGeometry, ExtrudeGeometry, or any other solution?
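To make the goal concrete, here is a rough sketch in numpy of the construction I'm after (language-agnostic pseudocode rather than Three.js API; the helper name and the naive frame choice are mine, and the triangulation between neighbouring rings is left out): rings placed along the curve, each with its own radius, which is essentially what TubeGeometry does internally with a single fixed radius.

```python
import numpy as np

def varying_radius_tube(points, radii, segments=16):
    """Build ring vertices for a tube whose radius varies along the path.

    points: (N, 3) array of centreline samples
    radii:  (N,) array, one radius per sample
    Returns an (N, segments, 3) array of ring vertices.
    Conceptual sketch only, not Three.js API.
    """
    points = np.asarray(points, dtype=float)
    n = len(points)

    # Tangents along the curve via finite differences.
    tangents = np.gradient(points, axis=0)
    tangents /= np.linalg.norm(tangents, axis=1, keepdims=True)

    up = np.array([0.0, 1.0, 0.0])
    theta = np.linspace(0.0, 2.0 * np.pi, segments, endpoint=False)
    rings = np.empty((n, segments, 3))
    for i in range(n):
        t = tangents[i]
        # Pick a frame perpendicular to the tangent (naive; a
        # parallel-transport frame would avoid twisting).
        ref = up if abs(np.dot(t, up)) < 0.99 else np.array([1.0, 0.0, 0.0])
        n_vec = np.cross(t, ref)
        n_vec /= np.linalg.norm(n_vec)
        b_vec = np.cross(t, n_vec)
        # Circle of radius radii[i] around points[i], in the (n_vec, b_vec) plane.
        rings[i] = (points[i]
                    + radii[i] * (np.cos(theta)[:, None] * n_vec
                                  + np.sin(theta)[:, None] * b_vec))
    return rings
```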
I have InstancedBufferGeometry working in my scene. However, some of the instances are mirrors of the source, so they have a negative scale applied to represent the mirrored geometry.
This flips the winding order of those instances, and they look wrong due to back-face culling (which I want to keep enabled).
I'm fully aware of the limitations within this approach, but I was wondering if there was a way to tackle this that I may have not come across yet? Maybe some trick in the shader to specify which ones are front face and which are back face? I don't recall this being possible though...
Or should I be doing two separate loads? (Which will duplicate the draw calls)
I'm loading a lot of different geometries (which are all instanced) so trying to make sure I get the best performance possible.
Thanks!
Ant
[EDIT: Added a little more detail]
It would help if you provided an example. As far as I understand your question, the simple answer is: no, you can't do that.
As far as I'm aware, primitives are culled before they reach the fragment shader, so it's not something you can control from there. If you want to use negative scaling and still make sure the surfaces are visible, enable rendering of both faces (front and back) on the material.
Alternatively, you might be okay with simply rotating objects and sticking to positive scale; if you really have to have mirroring, you're out of luck there.
One more idea: use two instanced objects, one with the normal geometry and one with a mirrored copy, and fix up the winding and normals in the mirrored geometry, as sketched below.
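To make that last idea concrete, here is a minimal numpy sketch (language-agnostic, not Three.js code; the helper name is mine) of what "mirror the geometry and fix up the normals" means: negate one axis of the positions and normals, and swap two indices per triangle so the faces stay front-facing under back-face culling.

```python
import numpy as np

def mirror_geometry(positions, normals, indices, axis=0):
    """Mirror an indexed triangle mesh across one axis while keeping it
    compatible with back-face culling.

    positions, normals: (N, 3) float arrays
    indices: (M, 3) int array of triangle indices
    axis: 0, 1 or 2 -- the axis to reflect across (hypothetical helper)
    """
    mirrored_pos = positions.copy()
    mirrored_pos[:, axis] *= -1.0          # reflect vertex positions

    mirrored_nrm = normals.copy()
    mirrored_nrm[:, axis] *= -1.0          # reflect normals the same way

    # Reflection reverses the winding order, so swap two indices per
    # triangle to keep the faces front-facing after the mirror.
    mirrored_idx = indices.copy()
    mirrored_idx[:, [1, 2]] = mirrored_idx[:, [2, 1]]

    return mirrored_pos, mirrored_nrm, mirrored_idx
```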
I have an idea for an app that takes a printed page with a square in each of the four corners and allows you to measure objects on the paper, given that at least two of the squares are visible. I want users to be able to take a picture from less-than-perfect angles and still have the objects measured accurately.
I'm unable to figure out exactly how to find information on this subject due to my lack of knowledge in the area. I've been able to find examples of OpenCV code that do some interesting transforms and the like, but I haven't yet figured out how to phrase what I'm asking in simpler terms.
Does anyone know of papers or mathematical concepts I can lookup to get further into this project?
I'm not quite sure how or who to ask other than people on this forum, sorry for the somewhat vague question.
What you describe is very reminiscent of augmented reality marker tracking. Maybe you can start by searching these words on a search engine of your choice.
A single marker, if done correctly, can be used to identify it without confusing it with other markers AND to determine how the surface is placed in 3D space in front of the camera.
But that's all very difficult and advanced stuff; I'd strongly advise against trying to implement something like this yourself, as it could take years of research. The realistic route is to use a ready-made open-source library that outputs the data you need for your app.
Such a library may not even exist; in that case you'll have to buy one, which, given how niche your problem is, would be perfectly plausible.
Here I'll cover only the programming aspect; if you want, you can dig into the mathematical side through these examples. Most of the functions you need are available in OpenCV. Here are some examples in Python:
To detect the printed paper, you can use the cv2.findContours function. The outermost contour is probably the paper, but you need to test this on actual images. https://docs.opencv.org/3.1.0/d4/d73/tutorial_py_contours_begin.html
If the page is skewed (not photographed at a perfect angle), you can find the rotation with cv2.minAreaRect, which returns the angle of the contour found above. https://docs.opencv.org/3.1.0/dd/d49/tutorial_py_contour_features.html (part 7b).
If you want to rotate the paper, use cv2.warpAffine. https://docs.opencv.org/3.0-beta/doc/py_tutorials/py_imgproc/py_geometric_transformations/py_geometric_transformations.html
To detect the objects on the paper, there are several methods. The easiest is to use the contours found above; if the objects have distinct colors, you can also pick them out with a color filter. https://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_imgproc/py_colorspaces/py_colorspaces.html
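A rough end-to-end sketch of these steps in Python (the input file name and the colour thresholds are placeholders, and the assumption that the largest contour is the page needs testing on real images):

```python
import cv2
import numpy as np

img = cv2.imread("page_photo.jpg")            # placeholder input file
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_, thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# 1. Find contours; assume the largest one is the printed page.
#    The [-2:] slice keeps this working on both OpenCV 3.x and 4.x.
contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)[-2:]
page = max(contours, key=cv2.contourArea)

# 2. Get the page's rotation angle from its minimum-area rectangle.
#    (The angle convention differs between OpenCV versions and may need
#    a +/- 90 degree adjustment.)
(cx, cy), (w, h), angle = cv2.minAreaRect(page)

# 3. Rotate the image so the page is roughly axis-aligned.
rows, cols = img.shape[:2]
M = cv2.getRotationMatrix2D((cx, cy), angle, 1.0)
deskewed = cv2.warpAffine(img, M, (cols, rows))

# 4. Detect objects on the page, e.g. with a colour filter in HSV space.
hsv = cv2.cvtColor(deskewed, cv2.COLOR_BGR2HSV)
mask = cv2.inRange(hsv, np.array([100, 50, 50]), np.array([130, 255, 255]))
```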
I am currently trying to dive into the topic of WebGL shaders with THREE.js. I would appreciate if someone could give me some starting points for the following scenario:
I would like to create a fluid-like material which either interacts with the user's mouse or «flows» on its own.
a little like this
http://cake23.de/turing-fluid.html
I would like to pass a background image to it, which serves as a starting point in terms of which colors are shown in the «liquid sauce» and where they are at the beginning. So to say: I define the initial image, which is then transformed both by a self-initiated liquid flow and by the user's interaction.
How I would proceed, with my limited knowledge:
I create a plane with the wanted image as a texture.
On top (between the image and the camera) I create a new mesh (plane too?) and this mesh has some custom vertex and fragment shaders applied.
Those shaders should somehow take the color from behind (from the image) and then move those vertices around following some physical rules...
I realize that the given example above has unminified code, but there is still so much of it that I can't break it down into simpler concepts that I fully understand. So I would really appreciate it if someone could give me some simpler concepts to use as a starting point.
more pages addressing things like this:
http://www.ibiblio.org/e-notes/webgl/gpu/fluid.htm
https://29a.ch/sandbox/2012/fluidwebgl/
https://haxiomic.github.io/GPU-Fluid-Experiments/html5/
Well, anyway thanks for every link or reference, for every basic concept or anything you'd like to share.
Cheers
Edit:
Getting a visually similar result to this image would be great:
I'm trying to accomplish a similar thing. I've been surfing the web a lot, looking for any hint I can use. So far, my conclusions are:
Try to support yourself with three.js.
The magic is really in the shaders, mostly the fragment shaders; a good first step is understanding how to write them and how they work. This link is a good start: shader tutorial
Understanding the dynamic (natural/real) behavior of fluids could be valuable (the governing equations); see the small numpy sketch after this list.
Maybe this can help you a bit too: Raindrop simulation
If you have found anything more on this, let me know.
I found these shaders already written; maybe one of them can help you without forcing you to learn a pile of extra stuff: splash shaders
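To make the fluid-equations point a bit more concrete: demos like the turing-fluid link above keep the simulation state in a texture and update it every frame (ping-pong render targets). Ignoring WebGL entirely, here is the same iterative idea in plain numpy as a Gray-Scott reaction-diffusion step; the parameter values are just commonly used defaults, not taken from that demo.

```python
import numpy as np

def laplacian(a):
    # 5-point Laplacian with wrap-around boundaries
    return (np.roll(a, 1, 0) + np.roll(a, -1, 0) +
            np.roll(a, 1, 1) + np.roll(a, -1, 1) - 4.0 * a)

def gray_scott_step(u, v, du=0.16, dv=0.08, feed=0.035, kill=0.065, dt=1.0):
    """One update of a Gray-Scott reaction-diffusion system.
    u and v are 2D float arrays playing the role of the 'state texture'."""
    uvv = u * v * v
    u += dt * (du * laplacian(u) - uvv + feed * (1.0 - u))
    v += dt * (dv * laplacian(v) + uvv - (feed + kill) * v)
    return u, v

# Illustrative seeding of the state from an image's brightness:
# state_v = image_gray / 255.0
# state_u = 1.0 - 0.5 * state_v
# then call gray_scott_step(state_u, state_v) once per frame.
```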
good luck
I'm messing around with image manipulation, mostly using Python. I'm not too worried about performance right now, as I'm just doing this for fun. Thus far, I can load bitmaps, merge them (according to some function), and do some REALLY crude analysis (find the brightest/darkest points, that kind of thing).
I'd like to be able to take an image, generate a set of control points (which I can more or less do now), and then smudge the image, starting at a control point and moving in a particular direction. What I'm not sure of is the process of smudging itself. What's a good algorithm for this?
This question is pretty old but I've recently gotten interested in this very subject so maybe this might be helpful to someone. I implemented a 'smudge' brush using Imagick for PHP which is roughly based on the smudging technique described in this paper. If you want to inspect the code feel free to have a look at the project: Magickpaint
Try PythonMagick (ImageMagick library bindings for Python). If you can't find it on your distribution's repositories, get it here: http://www.imagemagick.org/download/python/
It has more effect functions than you can shake a stick at.
One method would be to apply a Gaussian blur (or some other type of blur) to each point in the region defined by your control points.
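For example, a minimal sketch with scipy (the helper name and the square region around each control point are my own choices):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def blur_around(image, cx, cy, radius=20, sigma=3.0):
    """Blur only a square region around a control point (cx, cy).

    image: 2D (grayscale) or 3D (H, W, channels) array.
    Illustrative helper, not an established API.
    """
    h, w = image.shape[:2]
    y0, y1 = max(0, cy - radius), min(h, cy + radius)
    x0, x1 = max(0, cx - radius), min(w, cx + radius)
    region = image[y0:y1, x0:x1]
    # Blur the spatial axes only; leave the channel axis (if any) untouched.
    sigmas = (sigma, sigma) + (0,) * (region.ndim - 2)
    image[y0:y1, x0:x1] = gaussian_filter(region, sigma=sigmas)
    return image
```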
One method would be to create a grid that your control points move, and then use texture-mapping techniques to map the image back onto the distorted grid.
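A minimal sketch of that idea using scipy's map_coordinates (the helper name and the Gaussian falloff are my own choices; it shifts pixels near a control point in a given direction, which is a basic smudge):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def smudge_warp(image, cx, cy, dx, dy, radius=30.0):
    """Shift pixels near (cx, cy) by roughly (dx, dy), fading with distance.

    Inverse warp: for each output pixel we look up where it came from in
    the source image, which is the usual texture-mapping trick.
    """
    h, w = image.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    # Gaussian falloff so the smudge blends into the surrounding image.
    weight = np.exp(-((xx - cx) ** 2 + (yy - cy) ** 2) / (2 * radius ** 2))
    src_x = xx - dx * weight          # inverse map: output <- source
    src_y = yy - dy * weight
    if image.ndim == 2:
        return map_coordinates(image, [src_y, src_x], order=1, mode='nearest')
    channels = [map_coordinates(image[..., c], [src_y, src_x],
                                order=1, mode='nearest')
                for c in range(image.shape[2])]
    return np.dstack(channels)
```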
I can vouch for the Gaussian blur mentioned above; it is quite simple to implement and gives a fairly decent blur result.
James