I had been importing point clouds into MeshLab without issues on my old PC, but my new PC is acting weird. It looks like a display setting, but I can't find anything that changes the effect (see .jpg).
The cloud is relatively small, but it contains big xyz numbers formatted as quite long strings. When zooming around, the points only appear in rows (as if they were layered). Does anyone have a clue why this might be happening?
PS: the point cloud is a .txt file of a tree, btw.
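For anyone with the same symptom: the big xyz values are a plausible culprit, since coordinates with large absolute values (e.g. georeferenced clouds) can get quantized by single-precision rendering, which produces exactly this layered, banded look. A minimal sketch of a recentering pre-pass, assuming the .txt holds whitespace-separated x y z columns (the file names are placeholders):

```python
import numpy as np

# Load whitespace-separated columns; the first three are x, y, z and any
# extra columns (e.g. intensity or RGB) ride along unchanged.
data = np.loadtxt("tree.txt")

# Subtract the centroid so the coordinates become small numbers around
# the origin; single-precision rendering then has enough significant
# digits left to place neighbouring points smoothly.
data[:, :3] -= data[:, :3].mean(axis=0)

np.savetxt("tree_centered.txt", data, fmt="%.6f")
```

If the banding disappears after recentering, precision, not a display setting, was the cause.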
I am trying to trace lines in the following image. My input image looks like this:
So far, using deep learning, I am able to distinguish the different shapes, and I get the following output, so I now have each shape's type and position:
which seems pretty good to me. I am also successful in finding lines between shapes, like the following, which I want to use as evidence:
which detects more or less all the lines between the shapes.
Question:
My aim is to detect which shape/object is connected to which shape, and in which direction. For now I am ignoring the directions (but any ideas for detecting them are welcome). I am really stuck on how to approach this problem of "tracing" the lines.
At first I thought of an algorithm that "connects" two objects whenever we find enough "evidence", but in most images I have a tree-like structure with three or more connections leaving a single object (as between the 4th and 5th levels of the input image), which makes that simple approach hard to apply.
Has someone solved such a problem before? I am just looking for an algorithm; the programming language is not a problem, but I am currently working in Python. Any help would be really appreciated.
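One way to make the "evidence" idea concrete, sketched below under stated assumptions: erase the detected shapes from the line mask, split the remaining pixels into connected segments, and add a graph edge for every segment that touches exactly two (slightly grown) shapes. This copes with tree-like layouts, because one shape may touch any number of segments. All names here (`line_mask`, `shapes`, `connection_graph`) are hypothetical, and binary uint8 masks from the detection steps described above are assumed:

```python
import cv2
import numpy as np

def connection_graph(line_mask, shapes, grow=5):
    """Link two shapes whenever one traced line segment touches both.

    line_mask: uint8 binary image of the detected lines.
    shapes:    list of (shape_id, uint8 binary mask) pairs.
    """
    # Grow each shape slightly so a line that stops at a shape's border
    # still overlaps it.
    kernel = np.ones((2 * grow + 1, 2 * grow + 1), np.uint8)
    grown = {sid: cv2.dilate(mask, kernel) for sid, mask in shapes}

    # Erase the shape interiors from the line mask, then split what is
    # left into individual line segments (connected components).
    lines_only = line_mask.copy()
    for _, mask in shapes:
        lines_only[mask > 0] = 0
    n_labels, labels = cv2.connectedComponents(lines_only)

    edges = set()
    for comp in range(1, n_labels):
        segment = labels == comp
        touched = [sid for sid, g in grown.items() if np.any(segment & (g > 0))]
        # A segment touching exactly two shapes is one edge of the graph.
        if len(touched) == 2:
            edges.add(tuple(sorted(touched)))
    return edges
```

Segments that touch three or more shapes indicate a junction and would need splitting at branch points (e.g. on a skeletonized mask) before this applies cleanly.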
I'm working on a 3D reconstruction project where I have trouble matching features in order to proceed with the reconstruction. To be more specific: when I match features of MATLAB's example images I get a high ratio of correct to wrong matches, but when I match features of my own photos, taken with a phone camera, I get almost only wrong matches. I've tried tuning the threshold, but the problem remains. Any ideas/suggestions about what is going wrong?
The descriptor I'm using is the SIFT descriptor from the VLFeat toolbox.
Edit: here is a dropbox link with the original images, the detected salient/corner points, and the matches.
I think your main problems here are the significant difference in lighting between the images and the specular reflections off the plastic casing. You are also looking at the inside of the USB drive through the transparent plastic, which doesn't help.
What feature detectors/descriptors have you tried? I would start with SURF, and then I would try MSER. It is also possible to use multiple detectors and descriptors, but you should be careful to keep them separate. Of course, there are also lots of parameters for you to tune.
Another thing that may be helpful is to take higher-resolution images.
If you are trying to do 3D reconstruction, can you assume that the camera does not move much between the images? In that case, try using vision.PointTracker to track points from one frame into the other instead of matching them.
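To illustrate the tracking suggestion, here is a minimal MATLAB sketch, assuming the Computer Vision System Toolbox is available (the image file names are placeholders; SURF is used for detection, as suggested above):

```matlab
I1 = rgb2gray(imread('view1.jpg'));
I2 = rgb2gray(imread('view2.jpg'));

% Detect interest points in the first image.
pts = detectSURFFeatures(I1);

% Track them into the second image with a KLT tracker instead of
% matching descriptors; the bidirectional error check drops bad tracks.
tracker = vision.PointTracker('MaxBidirectionalError', 2);
initialize(tracker, pts.Location, I1);
[trackedPts, valid] = step(tracker, I2);

matchedPoints1 = pts.Location(valid, :);
matchedPoints2 = trackedPts(valid, :);
```

Tracking works best when the baseline between the images is small; for wide-baseline pairs, descriptor matching with careful thresholding remains the usual route.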
I'm new to Three.js, and I have been using the EdgesHelper, which achieves the look I want for now. But I have two questions...
What is the default edge width/how is it calculated?
Can the edge width be changed?
I have searched around, and I'm pretty sure that, due to some limitation (of Windows rather than Three.js, I believe), there is no simple way to change the thickness of the edges. A lot of the examples I found with thicker edges would only work on a particular geometry, so the approach doesn't seem universal.
Perhaps I am wrong, but I would have thought this would be a very common requirement. Rather than spend hours/days/weeks trying to build what I want myself (which I'm not even sure I could do), does anyone know of a way to control the edge thickness, an existing example, or a library someone has already written that works on any shape (any imported OBJ, for example)?
Many thanks
Coming back to this: as Wilt mentioned, there are other threads on this. Basically, you cannot change the thickness due to a limitation in ANGLE. There are some workarounds, like THREE.MeshLine (also mentioned in the link Wilt posted), but I found most workarounds had some limitations for what I wanted.
https://mattdesl.svbtle.com/drawing-lines-is-hard explains why drawing thick lines is hard.
He also has a library, https://github.com/mattdesl/three-line-2d, which should make lines easier to use.
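For later readers, a minimal sketch of the situation in plain Three.js (`mesh` is a placeholder; EdgesGeometry is the modern replacement for EdgesHelper). The `linewidth` property on LineBasicMaterial exists and defaults to 1, but on Windows WebGL runs through ANGLE, which caps line width at 1, so larger values are ignored on most machines:

```js
// Edge outline for an arbitrary mesh. Any geometry works, including
// one loaded from an imported OBJ.
const edges = new THREE.LineSegments(
  new THREE.EdgesGeometry(mesh.geometry),
  new THREE.LineBasicMaterial({
    color: 0x000000,
    linewidth: 4, // default is 1; ignored under ANGLE, lines stay 1px
  })
);
mesh.add(edges);
```

The workarounds (THREE.MeshLine, three-line-2d) sidestep the cap by expanding each line segment into screen-aligned triangles, which is why they ship their own materials.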
So I'm making a game with my group in Processing for a project, and we all have different computers. The problem is that we built the game on one computer, and at this point we have realized that the (1200, 800) size we used does not work on our professor's computer. Unfortunately, we have hard-coded thousands of values to fit this resolution. Is there any way to make it fit on all computers?
From my own research I found that you can use screen.width and screen.height to get the size of the screen, and I set the game window to about half the screen size. However, all the images I loaded for the background and so on are 1200x800, so I am unsure how to go about modifying ALL of my pictures (backgrounds) and hard-coded values.
Is there any way to fix this without manually changing the thousands of hard-coded values? (Yes, I am fully aware of how bad it is that I hard-coded the numbers.)
Any help would be greatly appreciated. As mentioned in the title, the language is Processing.
As I'm sure you have learned your lesson about hard-coding numbers, I won't say anything about it :)
You may have heard of embedding a Processing PApplet inside a traditional Java JFrame or similar. If you are okay with scaling the image that your PApplet draws (i.e., it draws at the resolution you've coded, and the resulting image is scaled up or down to match the screen), then you could embed your PApplet in a frame, capture the PApplet's output to an image, scale the image, and then draw it to the screen. A quick googling yielded this SO question. It may make your game look funny if the resolutions are too different, but this is a quick and dirty way. It's possible that you'll want to do this in a separate thread, as suggested here.
Having said that, I do not recommend it. One of the best things (IMO) about Processing is not having to mess directly with AWT/Swing. It's also a messy kludge, and the "right thing to do" is just to go back and change the hard-coded numbers to variables. For your images, you can use PImage's resize(). You say your code is several hundred lines long, but in reality that isn't a huge amount; the best thing to do is just to suck it up and be unhappy for a few hours. Good luck!
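If the hard-coded coordinates all assume the 1200x800 design, Processing's own scale() transform can remap them in one place instead of touching every number. This is a variant of the "change them to variables" fix rather than the JFrame route; a minimal sketch, assuming Processing 2 (where displayWidth/displayHeight replaced screen.width/screen.height; "bg.png" is a placeholder):

```java
final int DESIGN_W = 1200;   // the resolution the game was coded against
final int DESIGN_H = 800;
PImage bg;

void setup() {
  size(displayWidth / 2, displayHeight / 2);   // half the screen, as before
  bg = loadImage("bg.png");
  bg.resize(width, height);                    // PImage.resize(), as suggested above
}

void draw() {
  image(bg, 0, 0);
  // One scale() call remaps every hard-coded design coordinate to the
  // actual window size, so the drawing code itself can stay unchanged.
  pushMatrix();
  scale(width / (float) DESIGN_W, height / (float) DESIGN_H);
  rect(600, 400, 50, 50);   // an example of the original hard-coded values
  popMatrix();
}
```

The caveat is that scale() also stretches stroke weights and distorts shapes if the aspect ratio differs, and mouse coordinates need the inverse mapping before comparing them against design-space values.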
I'm new to D3 and just getting the hang of it. My requirement is pretty simple: I want the x-axis to be log-scaled to a base of some decimal number. The default log scale has a base of 10, and scouring the reference API and the web hasn't yielded a way of changing the base. I might be missing something basic about D3, but I can't seem to get past this obstacle. Ideally, shouldn't there be a log.base() similar to the pow.exponent() of the power scale?
d3.scale.log().base(2) seems to work fine. (As Adrien Be points out.)
There isn't such a function (although it wouldn't be too hard to add one). Your best bet is to write your own function that applies the log transformation you need and then passes the result to a normal linear scale to get the final value.
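For completeness, a minimal sketch of both answers against the D3 v3 API the question uses (the domain/range numbers are placeholders); the manual version is just the change-of-base identity log_b(x) = ln(x) / ln(b) fed into a linear scale:

```js
// With a recent D3 v3.x, base() takes any number, including decimals:
var x = d3.scale.log()
    .base(2.5)
    .domain([1, 1000])
    .range([0, 960]);

// The workaround from the answer above: do the log yourself and feed
// a linear scale. Dividing by Math.log(base) changes the base of the log.
var base = 2.5;
var linear = d3.scale.linear()
    .domain([Math.log(1) / Math.log(base), Math.log(1000) / Math.log(base)])
    .range([0, 960]);

function xManual(value) {
  return linear(Math.log(value) / Math.log(base));
}
```

Both map values the same way; the built-in base() also gives you tick values at powers of the chosen base.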