.NET chart control with a logarithmic scale on the X-axis (Telerik)

For a project we are using the Telerik RadChart control to display a graph on a website. At the moment the X-axis uses a normal linear scale, but we would like to change that to a logarithmic scale. As far as we can tell, the control does not allow that.
Does anyone know of an alternative that would support this?
TIA,
David
Example of what we would like to achieve.

"Unfortunately the current version of RadChart does not support logarithmic X-Axis. We have already logged such a request in our public issue tracking system, you can find it here, however, taking a decision for implementing a certain feature depends on multiple conditions including complexity, existing features impact, demand rate, etc. It is not yet in our immediate plans, still, I would encourage you to use the above link to vote and track the issue."
Regards,
Nikolay
the Telerik team
Taken from here, as it was posted this month.

Related

How to extend a stepped line area to the edge of the graph in a Line and Stacked chart in Power BI

Goal:
The stepped line should cover the whole bar, extending all the way to the edge, like the example below (from a Google spreadsheet).
Problem:
I cannot find a method or trick to extend the stepped line to the edge.
Data:
Type  YTD in TGT  RF   GAP OVER
A     80          90   20
B     110         105  10
Is there any way to solve this, even if it means digging into the visual with developer mode in Visual Studio Code?
There is not much you can do. Power BI does not include the source of its visuals, so you can't dive into the "code behind".
You may get some improvement by adjusting the X-axis settings (padding); see the image below. You can also look at the visuals in the marketplace, which may have some options, though I haven't seen one that looks suitable. Alternatively, you can build your own custom visual, which requires a lot of time and effort.
My recommendation is to try to stick to the standard Power BI visuals and capabilities. It is not a custom development tool, and if you try to produce pixel-perfect reports to a designer spec or business requirement, you are going to continually bump up against a lot of complexity. If the demand is for full customisation of the UI and/or UX, then consider a different approach using custom development from the ground up. The expense of this is often prohibitive and, once explained, using the out-of-the-box Power BI features becomes a little more palatable.

Is OpenCV image similarity comparison reliable for objects? Is there any good-value alternative to open-source APIs?

I'm trying to choose an API to match object images taken with a cell phone against a list of images in a file system. The thing is, I'm afraid that I won't get reliable results and it won't be worth losing time on this feature.
I would really appreciate some advice regarding this topic.

How to handle large numbers of pushpins in Bing Maps

I am using Bing Maps with Ajax and I have about 80,000 locations to drop pushpins into. The purpose of the feature is to allow a user to search for restaurants in Louisiana and click the pushpin to see the health inspection information.
Obviously it doesn't do much good to have 80,000 pins on the map at one time, but I am struggling to find the best solution to this problem. Another problem is that the distance between these locations is very small (All 80,000 are in Louisiana). I know I could use clustering to keep from cluttering the map, but it seems like that would still cause performance problems.
What I am currently trying to do is to simply not show any pins until a certain zoom level and then only show the pins within the current view. The way I am currently attempting to do that is by using the viewchangeend event to find the zoom level and the boundaries of the map and then querying the database (through a web service) for any points in that range.
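Roughly, what I have looks like the sketch below (simplified; the /api/restaurants endpoint, its parameters, and the zoom threshold are placeholders, not real names):

    // Hypothetical sketch: re-query the web service whenever the view settles.
    // Assumes the Bing Maps V7 Ajax control ("map" is an initialized Microsoft.Maps.Map).
    Microsoft.Maps.Events.addHandler(map, 'viewchangeend', function () {
        if (map.getZoom() < 14) {       // below this (arbitrary) zoom, show nothing
            map.entities.clear();
            return;
        }
        var bounds = map.getBounds();   // LocationRect of the current view
        var url = '/api/restaurants' +  // placeholder web-service URL
            '?north=' + bounds.getNorth() + '&south=' + bounds.getSouth() +
            '&east=' + bounds.getEast() + '&west=' + bounds.getWest();
        fetch(url)
            .then(function (response) { return response.json(); })
            .then(function (points) {
                map.entities.clear();
                points.forEach(function (p) {
                    map.entities.push(new Microsoft.Maps.Pushpin(
                        new Microsoft.Maps.Location(p.lat, p.lon)));
                });
            });
    });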
It feels like I am going about this the wrong way. Is there a better way to manage this large amount of data? Would it be better to load all the points initially and have the data on hand, without having to hit my web service every time the map moves? If so, how would I go about it?
I haven't been able to find answers to my questions, which usually means that I am asking the wrong questions. If anyone could help me figure out the right question it would be greatly appreciated.
Well, I've implemented a slightly different approach to this. It was just a fun exercise, but I'm displaying all my data (about 140,000 points) in Bing Maps using the HTML5 canvas.
I load all the data to the client up front. Then I optimized the drawing process so much that I attached it to the "viewchange" event (which fires continuously while the view is changing).
I've blogged about this. You can check it here.
My example does not have any interaction on it, but that could easily be added (it should be a nice topic for a blog post). You would thus have to handle the events manually and search for the corresponding points yourself, or, if the number of points to draw and/or the zoom level were below some threshold, show regular pushpins.
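As a rough illustration of the canvas approach (not the blog's actual code; it assumes Bing Maps V7, a <canvas id="overlay"> element absolutely positioned over the map div, and the full dataset already on the client as an array of {lat, lon} objects called allPoints):

    var canvas = document.getElementById('overlay');
    var ctx = canvas.getContext('2d');

    function drawAllPoints() {
        ctx.clearRect(0, 0, canvas.width, canvas.height);
        ctx.fillStyle = 'rgba(200, 0, 0, 0.6)';
        for (var i = 0; i < allPoints.length; i++) {
            // Convert each geographic location to a pixel in the current view
            var px = map.tryLocationToPixel(
                new Microsoft.Maps.Location(allPoints[i].lat, allPoints[i].lon),
                Microsoft.Maps.PixelReference.control);
            if (px) {
                ctx.fillRect(px.x - 1, px.y - 1, 3, 3);  // one tiny square per point
            }
        }
    }

    // Redraw while the view changes, as described above.
    Microsoft.Maps.Events.addHandler(map, 'viewchange', drawAllPoints);
    Microsoft.Maps.Events.addHandler(map, 'viewchangeend', drawAllPoints);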
Anyway, another option, if you're not restricted to Bing Maps, is to use the likes of Leaflet. It allows you to create a Canvas Layer, which is a tile-based layer rendered client-side using the HTML5 canvas. It opens up a new range of possibilities. Check, for example, this map in GisCloud.
Yet another option, although more suitable for static data, is a technique called UTFGrid. The people who developed it can certainly explain it better than I can, but it scales to as many points as you want with phenomenal performance. It consists of a tile layer with your info and an accompanying JSON file containing something like an "ASCII-art" description of the features on the tiles. Then, using a library called Wax, it provides complete mouse-over and mouse-click events on it, without any performance impact whatsoever.
I've also blogged about it.
I think clustering would be your best bet if you can get away with using it. You say that you tried using clustering but it still caused performance problems? I went to test it out with 80,000 data points in the V7 Interactive SDK and it seems to perform fine. Test it out yourself by going to the link and changing the line in the Load module (Clustering tab):
TestDataGenerator.GenerateData(100,dataCallback);
to
TestDataGenerator.GenerateData(80000,dataCallback);
then hit the Run button. The performance seems acceptable to me with that many data points.

Is there an algorithm for positioning nodes on a link chart?

I'm a member of a small but fairly sociable online forum, and just for fun we've been plotting a chart of who's met who in real life. Here's what it looked like fairly recently.
(The colour is the "distance" from the currently-selected user, e.g., yellow is someone who's met someone who's met them. And no, I'm not Zak.) Apologies for the faded lines, they don't seem to have weathered the SO upload process very well.
It's generated as SVG, with a big block of JSON defining who's met who. The position (x,y) of each member on the chart is hard-coded into that JSON. Until now, it's been fairly easy to cope when someone meets someone else - at worst, maybe two or three people need to be shuffled around - but it does involve editing the co-ordinates manually. And now that the European and North American contingents are meeting up, and a few on the periphery are showing up at meets, all hell is breaking loose...
We can put some effort into making all the nodes draggable, which would make the job of re-arranging a bit less tiresome. But it seems more sensible to let the computer take care of positioning them, especially as the problem will only get harder with more members.
So, does anyone know of an algorithm for positioning these nodes on the chart, based on which other nodes they're linked with?
Ideally, it would
minimise or avoid long links
avoid having lines run underneath unrelated nodes
take account of the fact that well-connected nodes are bigger
do its best to show the wider "all these guys met each other" relationships (the big circle at the bottom is largely the result of one meet, for example, though the chart has no idea of when any two people met)
but if it gets us close enough to tweak it, that's progress.
And, what's the real name for these charts? I believe they're called "link charts", but I'm not getting good results from Google using that name or anything else I can think of.
We'll likely be implementing this in PHP or Javascript, but right now it's how to begin approaching the problem that's the bigger question.
Edit: Some great answers coming already. I would be very interested in the actual algorithm(s) used, though, as well as tools that do the job.
What you are looking for are, for example, force-based (force-directed) layout algorithms. There are quite a few libraries, and some have been named already, like Prefuse and yWorks. Here are a few more: JUNG, GVF, JGraph.
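To give an idea of what those libraries do under the hood, here is a bare-bones force-directed layout sketch in JavaScript (simplified: node sizes, edge weights, and the cooling schedule are ignored; the function and parameter names are made up for illustration):

    // nodes: [{id, x, y}, ...] with initial (e.g. random) positions
    // edges: [[indexA, indexB], ...] referring to positions in the nodes array
    function forceLayout(nodes, edges, iterations) {
        var repulsion = 5000, springLength = 100, springK = 0.05, step = 0.85;
        for (var it = 0; it < (iterations || 300); it++) {
            var fx = nodes.map(function () { return 0; });
            var fy = nodes.map(function () { return 0; });
            // Repulsion between every pair of nodes keeps them from overlapping
            for (var i = 0; i < nodes.length; i++) {
                for (var j = i + 1; j < nodes.length; j++) {
                    var dx = nodes[i].x - nodes[j].x, dy = nodes[i].y - nodes[j].y;
                    var d = Math.sqrt(dx * dx + dy * dy) || 0.1;
                    var f = repulsion / (d * d);
                    fx[i] += f * dx / d; fy[i] += f * dy / d;
                    fx[j] -= f * dx / d; fy[j] -= f * dy / d;
                }
            }
            // Spring attraction along edges keeps linked nodes close (short links)
            edges.forEach(function (e) {
                var a = nodes[e[0]], b = nodes[e[1]];
                var dx = b.x - a.x, dy = b.y - a.y;
                var d = Math.sqrt(dx * dx + dy * dy) || 0.1;
                var f = springK * (d - springLength);
                fx[e[0]] += f * dx / d; fy[e[0]] += f * dy / d;
                fx[e[1]] -= f * dx / d; fy[e[1]] -= f * dy / d;
            });
            // Nudge every node in the direction of its net force
            for (var k = 0; k < nodes.length; k++) {
                nodes[k].x += fx[k] * step;
                nodes[k].y += fy[k] * step;
            }
        }
        return nodes;
    }

Run for a few hundred iterations from random starting positions and the graph usually settles into a readable layout; the dedicated libraries add the extra constraints you list (node sizes, crossing avoidance, and so on).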
The real name for it is a "graph". To generate the graph with a good layout algorithm, the best approach is to use software that will do the job for you.
I advise you to use Gephi.
This software can do all the things you want.
Have a look at the yWorks tools.
You can Google for "graph visualization". There are more libraries for this, including Graphviz, but probably not all of your requirements will be met.
If you can deal with Java, take a look at Prefuse.
Have a look at NodeXL
Also, this book may be relevant.

Techniques for visualising change over time in graphs

I'm looking to display a graph (network diagram, not a chart) and show its changes over time. Is there a standard or best way to do this, or any kind of 'network diff' tool?
I'm looking for an overview of the general layout decisions involved, i.e. a list of options and trade-offs to be made, and best-practice guidelines where these exist.
Wow. Not an easy question! I'm curious if anyone can come up with some authoritative resources for you.
I haven't found any standard or best practice documented anywhere from a design standpoint, nor do I know of any tool specifically designed for determining and displaying the changes, but I have some ideas.
First, a few technical notes. There's GraphML, which you can use (and extend) to represent your graph in a standard format, and there are some parsers available, and it works with Prefuse and probably other display libraries. It's just XML, though - nothing too special. Creating the "diff" by comparing two GraphML files should be pretty simple.
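As a sketch of that diff step (assuming both snapshots have already been parsed from GraphML into plain objects of the form { nodes: ['a', 'b', ...], edges: [['a', 'b'], ...] }; the object shape and names are hypothetical):

    // Returns the nodes and edges that were added or removed between two snapshots.
    function diffGraphs(before, after) {
        var edgeKey = function (e) { return e[0] + '->' + e[1]; };  // normalize the key for undirected graphs
        var beforeNodes = new Set(before.nodes), afterNodes = new Set(after.nodes);
        var beforeEdges = new Set(before.edges.map(edgeKey));
        var afterEdges = new Set(after.edges.map(edgeKey));
        return {
            addedNodes:   after.nodes.filter(function (n) { return !beforeNodes.has(n); }),
            removedNodes: before.nodes.filter(function (n) { return !afterNodes.has(n); }),
            addedEdges:   after.edges.filter(function (e) { return !beforeEdges.has(edgeKey(e)); }),
            removedEdges: before.edges.filter(function (e) { return !afterEdges.has(edgeKey(e)); })
        };
    }

The result maps directly onto the colour scheme described below (neutral for unchanged, green for added, red for removed).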
The really interesting part is how to communicate the differences to the user.
In all cases, you should have a visual indicator for nodes and edges that are added or removed. You may use color, showing existing nodes as something neutral, say gray, new nodes as green, and removed nodes as red. There are lots of options.
You might find this slideshow interesting.
It's probably obvious, but, over time, the nodes should not move more than necessary to adapt to the new state of the graph - the layout should evolve, not start from scratch for every state. This is crucial for comparing the states.
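One simple way to get that stability (a sketch, not tied to any particular library): seed each new layout run with the previous positions and only invent coordinates for brand-new nodes, for example at the centroid of their already-placed neighbours.

    // prevNodes: [{id, x, y}, ...] from the previous state's layout
    // newGraph:  { nodes: [id, ...], edges: [[sourceId, targetId], ...] }
    function seedPositions(prevNodes, newGraph) {
        var prev = {};
        prevNodes.forEach(function (n) { prev[n.id] = n; });
        return newGraph.nodes.map(function (id) {
            if (prev[id]) {
                return { id: id, x: prev[id].x, y: prev[id].y };  // keep the old position
            }
            // New node: average the positions of neighbours that already exist,
            // falling back to a random spot if none of them are placed yet.
            var placed = newGraph.edges
                .filter(function (e) { return e[0] === id || e[1] === id; })
                .map(function (e) { return prev[e[0] === id ? e[1] : e[0]]; })
                .filter(Boolean);
            if (placed.length === 0) {
                return { id: id, x: Math.random() * 500, y: Math.random() * 500 };
            }
            var sx = 0, sy = 0;
            placed.forEach(function (p) { sx += p.x; sy += p.y; });
            return { id: id, x: sx / placed.length, y: sy / placed.length };
        });
    }

A few gentle layout iterations from that starting point usually settle the new nodes without disturbing the rest of the picture.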
Side-by-side before/after comparison. Present before and after snapshots of the same graph side-by-side. If your graph is very large and complicated, a side-by-side layout may be impractical. You could try overlaying one graph over the other, though that is likely to be disorienting.
Side-by-side series comparison, a.k.a. small multiples. Same as above, but showing as many points in time as is useful. Even more restrictive than before/after in terms of the space required.
Animate a single graph. I think the most intuitive method is to smoothly animate the graph changes, though a choppy slideshow could work if the changes between slides are not too drastic.
Showing details. If useful, you can spell out the change event details in a few different ways.
Show labels on the graph node (could be interactive if there are too many to show at once)
Show a list in a sidebar / legend. Nice if reading the progression of changes is useful, but harder to connect to the visual.
Show a timeline instead of a list. This shows the 'real' progression of events better than a simple list, which gives the impression that all the events are evenly spaced over time.
What you actually choose to do would depend largely on the nature of your dataset and your goals. A simple graph of a few dozen nodes and a few changes is a much different challenge than a huge network, like say every constellation in the night sky!
Here is an interesting study: http://publik.tuwien.ac.at/files/PubDat_198995.pdf
This paper presents a prototype, and user tests will be published soon in:
P. Federico, W. Aigner, S. Miksch, F. Windhager, and M. Smuc, "Vertigo zoom: combining relational and temporal perspectives on dynamic networks"; accepted as a talk at the 11th International Working Conference on Advanced Visual Interfaces (AVI 2012), Capri Island, 21-25 May 2012; in: Proceedings of the 11th International Working Conference on Advanced Visual Interfaces (AVI 2012), ACM, 2012, ISBN 978-1-4503-1287-5.
http://ieg.ifs.tuwien.ac.at/~federico/pub.php
Your question is kind of general; I'm not clear exactly what kinds of analysis you are aiming for. There are several network analysis packages that have some capacity for dynamics. Gephi is one. The networkDynamic and ndtv R packages provide tools for representing and visualizing network dynamics as animations and static layouts (disclaimer: I'm a maintainer).
