D3.js - how is it done? (algorithm)

I found this somewhere on the internet and I'm curious how this library manages to be so smooth (even on my slow PC) and, more importantly, what algorithm was used to create that 'sticky' effect. Maybe you know where I can learn about that kind of algorithm?
Thanks in advance for your help :)

This visualisation uses a force layout. The algorithm is described in the documentation, and you can of course have a look at the source code yourself. You can also look at the source of this particular visualisation to see what parameters etc. were used, but a better way to get started would be to look at the documentation and examples on the D3 website.
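To give a rough feel for the algorithm itself: a force layout simulates simple physics on the graph each animation frame - nodes repel each other like charges, links act like springs, and velocities are damped so everything settles into place (the 'sticky' feel). Here is a minimal, illustrative sketch of that loop in Python; D3 itself is JavaScript and adds optimizations such as a Barnes-Hut quadtree and cooling, and the names and constants below are made up for illustration, not D3's actual ones:

```python
import math
import random

# Illustrative force-directed layout loop (not D3's actual code).
# Nodes repel each other (charge), links pull their endpoints together
# (springs), and velocities are damped each tick so the layout settles.

nodes = [{"x": random.random() * 500, "y": random.random() * 500,
          "vx": 0.0, "vy": 0.0} for _ in range(30)]
links = [{"source": i, "target": (i + 1) % 30} for i in range(30)]

CHARGE = -200.0      # repulsion strength (negative = repel)
SPRING = 0.05        # link spring constant
REST_LENGTH = 40.0   # desired link length
FRICTION = 0.9       # velocity damping per tick

def tick():
    # Pairwise charge repulsion (O(n^2) here; D3 uses a quadtree instead).
    for a in nodes:
        for b in nodes:
            if a is b:
                continue
            dx, dy = a["x"] - b["x"], a["y"] - b["y"]
            dist2 = (dx * dx + dy * dy) or 1e-6
            f = -CHARGE / dist2
            a["vx"] += f * dx
            a["vy"] += f * dy
    # Spring attraction along links.
    for link in links:
        s, t = nodes[link["source"]], nodes[link["target"]]
        dx, dy = t["x"] - s["x"], t["y"] - s["y"]
        dist = math.hypot(dx, dy) or 1e-6
        f = SPRING * (dist - REST_LENGTH) / dist
        s["vx"] += f * dx
        s["vy"] += f * dy
        t["vx"] -= f * dx
        t["vy"] -= f * dy
    # Integrate positions and apply friction; this is the 'sticky' settling.
    for n in nodes:
        n["vx"] *= FRICTION
        n["vy"] *= FRICTION
        n["x"] += n["vx"]
        n["y"] += n["vy"]

for _ in range(300):   # run until the layout has cooled down
    tick()
```

The smoothness mostly comes from running a cheap tick like this once per animation frame and only redrawing what moved, rather than from anything exotic in the math.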

Related

How do I get started capturing and saving Project Tango point cloud data as a mesh?

I am in a similar situation to the one caspertm was in when asking this question: How do I export Point Cloud Data (Project Tango)?
I apologize that I cannot comment on other questions yet, or I would have just done so on that question. I too was looking for the functionality the mapper app provided (specifically the capturing and saving of 3D environments) and have found, through searching and reading that question, that it is not available for the tablet. The answer provided to caspertm's question was to use the point cloud data sample code as a starting point and modify it to log the data to a file.
I am wondering if anyone would be willing to go into more detail about what needs to be modified in the point cloud sample (I am using the Java version) to save that data and retrieve it later on my computer, so I can manipulate it in a program like Blender or Unity.
I am very new to the Android development process. I can read the sample point cloud Java code and get a very basic understanding of what is going on, but I definitely have a lot of learning to do. I realize I am asking for a lot of help and don't expect any one person (or even several) to paint me the entire picture, but tips on things like the following would be greatly appreciated:
- whether this data should be saved internally or externally,
- which Java file requires the saving code,
- how to format the file to be readable in other 3D programs, and
- how to see more than just the current snapshot of the point cloud.
If anyone could point me in the right direction on how to get the actual environment colors projected onto the cloud data, that would be amazing too, but any help or links for any of these requests would be welcome.
Thanks so much!
This answer addresses only the computational geometry aspects - the issues involved in getting the point cloud, phoning home with it, stuffing it in a file, etc. are considered 'self-evident' in order to more quickly go play with the math :-)
The nice, shallow, pretty answer: if you're scanning something where the point cloud represents an object with a fair, curvy or straight surface, then the suggestions here will help - https://blender.stackexchange.com/questions/7028/wrapping-a-mesh-around-point-cloud-with-cavities. Please note that 'fair' is a loaded word.
The more detailed answer isn't pretty - and reality has a way of handing you point clouds that make the preceding algorithms very irritated. If you are looking to take a random cloud of points (yes, I know it's a meaningful cloud of points to you, but mathematicians make much of these details) and reconstruct a geometry from it, i.e. define the topology that relates those points in a meaningful way, you're talking about a very nasty problem. Check the internet for discussions of Delaunay Triangulation and Voronoi diagrams, which are the more traditional approaches to solving this issue. Sort of. It's pretty straightforward if you were scanning a model of a volcano. Assuming Tango could see it (I think probably not), scanning the Calder mobile at JFK would give pretty much anyone a drinking problem. The algorithms themselves assume a planar basis and do not react well to fiddling with that assumption. Explaining this requires talking about manifolds, and reading between the lines of your question, I'm assuming you'd rather not have me go any further.
You should be able to find some open source implementations - if it builds and passes all of its unit tests, then you should be OK using it as a black box. If you have to reach inside, be careful. Those things bite :-)
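To make the 'planar basis' point concrete: for a cloud that really is roughly a height field (the volcano case), a quick-and-dirty approach is to Delaunay-triangulate the XY projection and reuse those triangles as 3D faces. Below is a hedged sketch using SciPy with placeholder data; it falls apart in exactly the situations described above (overhangs, closed objects, Calder mobiles):

```python
import numpy as np
from scipy.spatial import Delaunay

# Illustrative only: Delaunay-triangulate the XY projection of a point cloud
# and reuse the triangle indices as mesh faces over the original 3D points.
# This assumes the cloud is (roughly) a height field z = f(x, y); it will not
# behave for overhangs, closed objects, or anything without a planar basis.

points = np.random.rand(500, 3)       # placeholder for your logged cloud
tri = Delaunay(points[:, :2])         # triangulate in the plane

faces = tri.simplices                 # (M, 3) vertex indices per triangle
print(f"{len(points)} points -> {len(faces)} triangles")

# 'points' plus 'faces' is already enough to write an OBJ or PLY mesh that
# Blender or Unity can import.
```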
I think I can partially answer the question:
In terms of saving the points, it should be fairly simple: you could keep a file open and write the point data into it each time the callback is called. However, as the Project Tango developer website mentions, the data provided by the API is just points, not a mesh. That means that after getting the points you will need to figure out your own way to construct the indices (the mesh connectivity).
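If it helps, one simple way to get the logged points into Blender or similar tools is to write them out as an ASCII PLY file (vertices only, no faces). The hedged sketch below is Python meant to run on your computer after pulling the logged data off the device; the sample data and file names are placeholders, and on the device you would do the equivalent writing in Java inside the point cloud callback:

```python
# Minimal sketch: write a point cloud (no faces) as ASCII PLY, which Blender,
# MeshLab, CloudCompare, etc. can open directly. 'points' stands in for the
# (x, y, z) triples you logged from the Tango point cloud callback.

points = [(0.1, 0.2, 1.5), (0.3, 0.1, 1.4), (0.2, 0.4, 1.6)]  # placeholder data

def write_ply(path, points):
    with open(path, "w") as f:
        f.write("ply\n")
        f.write("format ascii 1.0\n")
        f.write(f"element vertex {len(points)}\n")
        f.write("property float x\nproperty float y\nproperty float z\n")
        f.write("end_header\n")
        for x, y, z in points:
            f.write(f"{x} {y} {z}\n")

write_ply("pointcloud.ply", points)
```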

How to do algorithm visualization?

I am looking for an algorithm visualization library/tool that is well documented and that you can call from your source code.
I took a look at jhave - example of usage. I liked it, and it seems to have some documentation, but I do not trust its future.
I found this article about Algorithm Explorer; it has a nice idea. It is implemented as a C++ API, but I cannot find it anywhere.
My main idea is that I want to do some unit tests for the brain.
So I construct various exercises, and in the future, when I want to test my knowledge, I redo them.
I have found that images stick with me longer, which is why I want to visualize algorithms in certain states. (I might remember a tricky case better, like what happens when the data is sorted in reverse and I use quicksort, if I can view it.)
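For illustration only, one low-tech way to get those "snapshots" without a dedicated library is to have the algorithm emit its state at interesting points and render each state however you like afterwards (text, images, etc.). A rough sketch for the reverse-sorted quicksort case, with plain-text output standing in for real rendering:

```python
# Rough sketch: record quicksort's state after every partition so the
# "tricky case" (already reverse-sorted input, last element as pivot)
# can be replayed visually later.

snapshots = []

def quicksort(a, lo=0, hi=None):
    if hi is None:
        hi = len(a) - 1
    if lo >= hi:
        return
    pivot = a[hi]                      # naive last-element pivot
    i = lo
    for j in range(lo, hi):
        if a[j] < pivot:
            a[i], a[j] = a[j], a[i]
            i += 1
    a[i], a[hi] = a[hi], a[i]
    snapshots.append((list(a), lo, hi, i))   # state, range, pivot position
    quicksort(a, lo, i - 1)
    quicksort(a, i + 1, hi)

data = list(range(10, 0, -1))          # the reverse-sorted worst case
quicksort(data)

for state, lo, hi, p in snapshots:     # crude text "visualization"
    print(state, f"  partitioned [{lo}, {hi}], pivot landed at {p}")
```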
An ideal tool:
1. Has to integrate with any language.
2. Has to be well documented, with a growing community and examples.
3. Has to be implemented on top of a capable rendering engine (Ogre, XNA).
Here is the place you need to visit: The Algorithm Visualization Portal!

How to best implement an OGC best practice

I'm having some difficulty wrapping my head around building an OGC-compliant Earth Observation ordering service. I'm not asking for a step-by-step process, but rather hoping to spawn a high-level discussion about what might be the best way to approach this task.
There is a best practice document on what I would like to accomplish:
Order Services for Earth Observation Products OGC 06-141r2
However, I'm not sure whether I should take the schema (XSD) files that are at the bottom of the PDF and generate stubs from them, or leverage GeoNetwork in some way. I have no idea where to start. Does anyone have any experience implementing OGC standards, best practices or something similar? Where do I start?
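For illustration, one hedged way to start "from the XSDs" without committing to generated stubs is to load them with a generic XML Schema library and validate/decode sample order documents while you design the service. The sketch below uses the Python xmlschema package; the file names are placeholders, not the actual schema file names from OGC 06-141r2:

```python
# Hedged sketch: load one of the ordering-service XSDs referenced by
# OGC 06-141r2 and check a hand-written sample order request against it.
# File names here are placeholders, not the actual schema file names.
import xmlschema

schema = xmlschema.XMLSchema("oseo_order.xsd")      # hypothetical local copy

# Validate a sample request while prototyping the service.
print(schema.is_valid("sample_order_request.xml"))

# Decode the request into plain Python dicts to see what the service
# would have to handle, before committing to generated stubs.
order = schema.to_dict("sample_order_request.xml")
print(order)
```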
I would suggest contacting the editor directly. I believe there are reference implementations that you can take a look at. The editor is Daniele Marchionni.
Cheers

Does BiDi support need to extend to visualizations?

I'm in the process of writing a visualization library for a product I work on and I've been thinking about i18n and BiDi support. I haven't been able to find a good answer anywhere, and my Project Manager doesn't really know the answer either.
My question is this: how far should I take bi-directionality with visualizations? Should the entire visual be mirrored, or only the labels on the key/axes? What is expected in the Right-to-Left reading world?
Note
I'm specifically thinking of Gauges and Bar Charts right now... if that helps the discourse.
For Hebrew:
You should not flip bar and line charts. Keep the x axis values growing from left to right. You can and should localize the labels.
Gauges should also usually remain as is, keeping their "clockwise" feel, with localized labels.
And of course there are always exceptions so always consult with your client or end users.
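As a toy, hedged illustration of "localize the labels, don't flip the chart": the bar geometry and x axis keep their left-to-right order regardless of locale, and only the label strings change. The labels and locale handling below are placeholders for whatever your translation catalog provides:

```python
import matplotlib.pyplot as plt

# Toy sketch: the bar positions and the x axis keep their left-to-right
# order regardless of locale; only the label strings are swapped out.
# (Rendering Hebrew correctly also requires a font with Hebrew glyphs.)
values = [12, 30, 7, 22]
labels_by_locale = {
    "en": ["Q1", "Q2", "Q3", "Q4"],
    "he": ["רבעון 1", "רבעון 2", "רבעון 3", "רבעון 4"],  # from your catalog
}

locale = "he"
plt.bar(range(len(values)), values)                        # geometry untouched
plt.xticks(range(len(values)), labels_by_locale[locale])   # labels localized
plt.savefig("chart.png")
```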
The answer, as usual, is it depends. This is a very complex question for which there is no easy answer.
For starters, I would suggest reading Michael Kaplan's blog Sorting It All Out.
http://www.siao2.com/2010/02/02/9956547.aspx
I am no expert, but my understanding is that, in general, people who are reading RTL expect things to be mirrored more often than not.

Profiling visualization tools?

I need to display profiling information pulled from a deeply embedded CPU, presenting it in a way that other developers on my team will be able to act upon. The profiling data is a snapshot of a cycle counter at the entry and exit of every function, so we have a call graph annotated with sub-microsecond timing accuracy. I'd prefer not to just dump out function names and timings like gprof does; I'm looking for something easier to understand and act upon.
Has anyone worked with a particularly good profiling tool (on any platform) which made it easy to identify areas of the code to drill into? I'm looking for an inspirational example to follow for how to display the call graph, but if there is a good tool with an input format I can massage my data into, I'll use it. I could use Windows, Linux, or Mac OS X to run the visualization tool.
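For what it's worth, turning raw entry/exit cycle-counter snapshots into per-function times and caller/callee edges - the shape most call-graph visualizers want as input - is mostly a stack exercise. A hedged sketch; the event format below is made up, so adapt it to whatever your instrumentation actually emits:

```python
# Hedged sketch: fold (function, enter/exit, cycle_count) events into
# per-function inclusive cycles and caller->callee call counts, roughly
# the shape most call-graph visualizers (dot, gprof2dot, ...) expect.
from collections import defaultdict

events = [                      # made-up trace format for illustration
    ("main", "enter", 0),
    ("parse", "enter", 120),
    ("parse", "exit", 5120),
    ("solve", "enter", 5200),
    ("solve", "exit", 90200),
    ("main", "exit", 90500),
]

inclusive = defaultdict(int)    # cycles spent in a function and its callees
edges = defaultdict(int)        # (caller, callee) -> call count
stack = []                      # (function, entry_cycle)

for fn, kind, cycles in events:
    if kind == "enter":
        if stack:
            edges[(stack[-1][0], fn)] += 1
        stack.append((fn, cycles))
    else:
        name, start = stack.pop()
        assert name == fn, "unbalanced enter/exit trace"
        inclusive[fn] += cycles - start

print(dict(inclusive))
print(dict(edges))
```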
A profiling article on IBM DeveloperWorks led me to GraphViz, with a profiling example on their site. Barring another suggestion here, I'll use GraphViz and mimic their profiling example.
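If you do go the GraphViz route, the input is just text, so generating an annotated call graph from your own data is straightforward. A hedged sketch with placeholder function names and timings:

```python
# Hedged sketch: write a call graph annotated with timing as GraphViz dot.
# Render with:  dot -Tpng callgraph.dot -o callgraph.png
edges = {   # (caller, callee) -> (calls, microseconds); placeholder values
    ("main", "parse"): (1, 5000),
    ("main", "solve"): (1, 85000),
    ("solve", "inner_loop"): (4000, 60000),
}

with open("callgraph.dot", "w") as f:
    f.write("digraph callgraph {\n")
    f.write('  node [shape=box, fontname="Helvetica"];\n')
    for (caller, callee), (calls, usec) in edges.items():
        f.write(f'  "{caller}" -> "{callee}" '
                f'[label="{calls}x / {usec} us"];\n')
    f.write("}\n")
```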
Another neat tool to visualize profiling data is the gprof2dot.py Python script.
It can be used to visualize several different formats: "This is a Python script to convert the output from prof, gprof, oprofile, Shark, AQtime, and python profilers into a dot graph." This is what the output looks like:
(example output image; source: googlecode.com)
I use Kprof:
http://kprof.sourceforge.net/
It is old, but I have never found a better tool to inspect the results from gprof.
How about "GTKWave"?
But you have to insert the probe in your code.
Valgrind does profiling (and more), and there are GUIs for visualization.
I suggest you drop gprof+graphviz for OProfileUI, unless you don't have a choice.
JetBrains dotTrace (it has a trial demo you can play with). It organizes the call stacks and makes it easy to find the trouble spots. It has a lot of filtering capabilities as well, and is very easy to navigate to find what you're looking for.
IE 8b2 offers a simple display of the call tree for javascript that I believe is much more useful than the GraphViz chart.
The GraphViz chart is wonderful for visualizing the call tree but makes it very difficult to visualize timing issues (IMHO the more important data).
*Edit: I think it is worth pointing out that all of the tools suggested use a grid-based tree to visualize the call tree. This allows you to see the calling structure without downplaying the timing data, as I believe the GraphViz chart does.*
You can use Senseo, a plugin for Eclipse. It shows you performance, memory allocation, objects created, time spent and the actual methods invoked; you can hover over method signatures or calls, and it provides a call context tree, package explorer integration and more.
I've written a browser-based visualization tool, profile_eye, which operates on the output of gprof2dot.
gprof2dot is great at grokking many profiling-tool outputs, and does a great job at graph-element placement. The final rendering is a static graphic, which is often very cluttered.
Using d3.js it's possible to remove much of that clutter, through relative fading of unfocused elements, tooltips, and a fisheye distortion.
For comparison, see profile_eye's visualization of the canonical example used by gprof2dot.
