I would like to design a GUI for a multi-touch device to navigate through a feed of articles. The articles are tagged and organized in a few hierarchies (e.g. topic hierarchy, GEO hierarchy if the articles come from different locations, etc.)
The purpose of the GUI is to navigate quickly through the tags and hierarchies and find interesting articles.
I would like to build a tree map, so that each tile represents either a hierarchy or a tag. The tile displays its hierarchy/tag name and a "pile" of articles; the "pile" actually displays only a preview of the top article.
The user can zoom the entire tree map to see more elements of the hierarchy and enlarge the previews of the articles. The user can also select a tile (or an article) and zoom it separately.
Does it make sense?
It makes sense to me. A few questions come to mind that should be considered early in the design:
Alignment
Are your tiles hierarchical? (i.e. the tile "programming" has sub-tiles "java", "c++", "python", ...) In that case it makes perfect sense to use a tree map. But you have to keep in mind how you will arrange tiles and previews. It is quite hard to find enough space if you want to label your different tiles; unfortunately, text is in most cases much wider than it is tall, so you soon end up with something that looks more like a flat tree than an actual map.
Hierarchy
If you have previews on different levels of your hierarchy, make a clear distinction as to which level they are placed on, either by color-coding them or by using different sizes.
Readability
If you have a deep hierarchy you may not see much detail in the separate panels any more. Therefore, consider a very basic "preview" and add detail when the user zooms in, or add new levels only when the zoom factor is large enough to display them readably.
Heap effect
If you have many panels on the same level, it can easily become crowded. Use the tree-map configuration to scale your parent tile according to the number of child items.
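To make that scaling concrete, here is a minimal slice-and-dice layout sketch in JavaScript (the function and node shape are illustrative, not taken from any particular toolkit) that gives each tile an area proportional to the size of its subtree:

```javascript
// Sketch: slice-and-dice treemap layout. Each node may have a
// `children` array; after the call, each node gets a `rect`.
// Weights are subtree sizes, so crowded branches get more room.
function layoutTiles(node, x, y, w, h, depth = 0) {
  node.rect = { x, y, w, h };
  const children = node.children || [];
  if (children.length === 0) return node;
  const weight = n =>
    1 + (n.children || []).reduce((s, c) => s + weight(c), 0);
  const total = children.reduce((s, c) => s + weight(c), 0);
  let offset = 0;
  for (const child of children) {
    const frac = weight(child) / total;
    if (depth % 2 === 0) {
      // vertical cuts on even depths
      layoutTiles(child, x + offset * w, y, w * frac, h, depth + 1);
    } else {
      // horizontal cuts on odd depths
      layoutTiles(child, x, y + offset * h, w, h * frac, depth + 1);
    }
    offset += frac;
  }
  return node;
}
```

A real treemap would likely use a squarified algorithm to keep tiles closer to square (which is friendlier for labels and previews), but the proportional-weighting idea is the same.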
Although this is not a direct answer to your question, it may help with your design decisions. I would be happy to hear about your further steps.
I'm considering implementing my own (toy) MVC framework, mainly just for practise and to have fun with it. I worked with such frameworks in the past but when thinking about how I would go about it a couple of questions arose.
So what puzzles me the most is how I should tackle the drawing of the visual elements. My idea was to implement each item's drawing logic in the item's class, organize the items into a tree structure, like in WPF, and pass down some sort of canvas that the elements can draw on while traversing the tree.
I'm having doubts, though, about whether I should pass a canvas down an entire visual tree. Another interesting thing is the handling of overlapping elements and which to draw first. I thought the visual tree would take care of that by drawing elements in the order they appear in a depth-first search. But then I thought that the newest element should be on top no matter how close it is to the root of the tree.
So basically, I couldn't really find anything on implementation best practices or details when it comes to drawing the elements, and I could use some friendly advice on this; if you could point me to some material that covers it, that would be more than welcome.
The MVC pattern typically doesn't tackle such granular details. It ultimately comes down to decomposing the problem into three broad domains: data and logic, user input, and user output.
I'm having doubts though, whether I should pass a canvas down an entire visual tree.
How come? From a performance or coupling/responsibility perspective?
If it's performance, this is a very solid start. You do have to descend down the tree and redraw everything by default, but that can be mitigated by turning your hierarchy into an accelerator and keeping track of which portions of the screen/canvas/image need to be redrawn ("dirty regions"). Only descend down the branches that overlap this dirty region.
For the dirty regions, you can break up your canvas into a grid. As widgets need updating, mark the region(s) they occupy as needing to be redrawn. Only redraw widgets occupying those grid cells which are marked as needing to be redrawn. If you want to get really elaborate and reduce overdraw to a minimum, you can use a quad-tree (though that is typically overkill for all but the most dynamic systems with elaborate animated content and the like).
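As a sketch of the grid idea (the class and method names here are made up for illustration), assuming axis-aligned widget rectangles:

```javascript
// Minimal dirty-region grid: the canvas is split into fixed cells;
// widgets mark the cells they occupy when they change, and a redraw
// pass only touches widgets that overlap a dirty cell.
class DirtyGrid {
  constructor(width, height, cell) {
    this.cell = cell;
    this.dirty = new Set(); // "col,row" keys
  }
  cellsFor(rect) {
    const keys = [];
    const c0 = Math.floor(rect.x / this.cell);
    const r0 = Math.floor(rect.y / this.cell);
    const c1 = Math.floor((rect.x + rect.w - 1) / this.cell);
    const r1 = Math.floor((rect.y + rect.h - 1) / this.cell);
    for (let c = c0; c <= c1; c++)
      for (let r = r0; r <= r1; r++) keys.push(c + "," + r);
    return keys;
  }
  markDirty(rect) {
    for (const k of this.cellsFor(rect)) this.dirty.add(k);
  }
  needsRedraw(rect) {
    return this.cellsFor(rect).some(k => this.dirty.has(k));
  }
  clear() { this.dirty.clear(); } // call after each redraw pass
}
```

During the tree traversal you would check `needsRedraw(widget.bounds)` and skip branches whose bounds touch no dirty cell.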
It might be tempting to make this problem easier by double-buffering everything and having children draw into their parents' canvases, but that trades some immediate performance gains for a large design-level performance barrier in the form of memory consumption and cache efficiency. I don't recommend this approach: double-buffer the window contents to avoid flickering artifacts, but not every single control inside of it.
If it's about coupling and responsibilities: in a UI context, it's often overkill to try to decouple the rendering of a widget from the widget itself. Decoupling rendering from entities is common in game architectures through entity-component systems, which provide rendering components (typically in the form of dumb data) and defer the rendering functionality to systems, but those take a great deal of upfront work to implement for tremendous flexibility which you might never need in this kind of context.
Another interesting thing is the handling of overlapping elements and which to draw first. I thought the visual tree would take care of that by drawing elements in the order they appear in a depth-first search. But then I thought that the newest element should be on top no matter how close it is to the root of the tree.
The tree doesn't have to be this rigid thing. You can send siblings to the front of a child list or to the back to affect drawing order. Typically z-order changes don't occur frequently, and most of the time you'd be better off this way than paying the overhead of sorting the draw order on the fly as you render.
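For example (a toy sketch with hypothetical names): moving a node to the end of its parent's child list makes it draw last in a depth-first traversal, and therefore appear on top:

```javascript
// Move `child` to the end of its parent's list: drawn last = on top.
function bringToFront(parent, child) {
  const i = parent.children.indexOf(child);
  if (i >= 0) {
    parent.children.splice(i, 1);
    parent.children.push(child);
  }
}

// Depth-first traversal yields the paint order.
function drawOrder(node, out = []) {
  out.push(node.name);
  for (const c of node.children || []) drawOrder(c, out);
  return out;
}
```

No per-frame sorting is needed; the tree itself encodes the z-order.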
Mostly I just recommend keeping it simple, especially if this is your first attempt at constructing a general-purpose MVC framework. You're far more likely to err on the side of making things too complicated and painting yourself into a corner. Simple designs are pliable designs.
I am new to d3 geo. My task is to make a map of Boston and add some interactive features to it.
So far I've been able to get an outline of Boston. But the base map should be comparable to something you'd see in Google Maps - it should have buildings, roads, street names and city names, rivers, etc. A basic geography that makes the region more familiar.
For now, I don't need to pan, and may have just two or three zoom states.
All the visualizations I've seen that overlay interactive features onto maps like this seem to use images for the underlying maps: windhistory, polymaps, google maps and more. So I guess my questions are:
Why do some maps use images for the "backdrop"? Is it just the easiest way to build on top of existing maps? Is it more performant?
If I go with the images approach, are there any limitations to the features I can add? I'm hoping to do things like windmaps, animations, heatmaps, etc.
What are the copyright implications for using images? I imagine the answer to this is, "depends on which images I use," but are there some standard libraries that have no strings attached? For example I know if I use Google Maps, I have to display their logo, there's an API limit, etc. Are there any standard sources that are completely open?
Are there any examples where geography is added purely through TopoJSON?
Sorry if some of these seem obvious, but I am completely new to maps and just don't know the standard practices. Thanks for any help!
A quick take on your questions. Hopefully someone with more mapping experience can give you more detail:
Why do some maps use images for the "backdrop"?
File size and computation time, mostly. Drawing complete maps with buildings, roads, and topography requires a lot of data and a lot of time for the browser to render it. If your browser DOM gets too complicated, it can slow down all interactions even after the original drawing.
If I go with the images approach, are there any limitations to the features I can add?
There's a reason most interactive maps use multiple layers. The background images are best for the underlying "lay of the land" imagery; anything you want to be interactive should be on top in SVG.
What are the copyright implications for using images?
If you're using someone's images, you have to follow their license. You might want to look at the OpenStreetMap project.
Are there any examples where geography is added purely through TopoJSON?
I suppose that depends on what you mean by "geography"; Mike Bostock has generated TopoJSON for a variety of features based on US Atlas data.
As for whether it makes sense: TopoJSON encodes paths/boundaries directly, and encodes regions as the area enclosed by a set of boundaries. You could use it to encode streets and rivers and even building outlines, but you're not saving any file size relative to regular GeoJSON, because those paths generally aren't duplicated the way region boundaries are. Relative to using image tiles, any improvement in file size would be offset by increased processing time.
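To illustrate why (a toy model, not the actual TopoJSON format): two adjacent regions can reference one shared boundary arc, while a street or river borders nothing and gets no such sharing:

```javascript
// Toy topology: arcs are stored once, regions reference them by index.
const arcs = [
  [[0, 0], [0, 1]],                   // arc 0: border shared by A and B
  [[0, 1], [1, 1], [1, 0], [0, 0]],   // arc 1: rest of region A
  [[0, 0], [-1, 0], [-1, 1], [0, 1]], // arc 2: rest of region B
];
const regionA = { arcs: [1, 0] };     // reuses arc 0 ...
const regionB = { arcs: [2, 0] };     // ... instead of copying its points
const pointsStored = arcs.reduce((s, a) => s + a.length, 0); // 10 points
// Plain GeoJSON would store each region's ring independently,
// duplicating the shared points (12 points for the same two shapes).
// A standalone street is a single arc used once: no duplication to
// remove, so no size win over GeoJSON.
```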
Disclaimer: I'm an absolute d3.js n00b. I'm beginning to become proficient in JavaScript, but d3.js is giving me a bit of a headache.
I have a project I want to do, quite similar to The Art of Asking. I have a lot of data to show: I have to go down to the parliamentary/congressional district level for a few countries, not just provinces or states, and trying to load all the SVG elements kills a few lower-end computers I've been testing the project on. I would rather not explode any computers. (Actually, I'm pretty sure not exploding computers is a requirement of my project.)
To work around this, I figure that a hide/show mechanism like The Art of Asking's is the path of least resistance, but while I can figure out how to hide an element, I haven't a clue how to show it again after it's been hidden. I think creating a map that acts like a collapsible tree would be similar to what I want, but I could also be going in the wrong direction entirely.
I just read Scott Hanselman's post on Guided View Technology in comics, and I thought that this would be awesome if implemented in other media (specifically manga).
Reading right to left is in itself a little weird to start with, and this would lower the barrier to entry for new readers.
I was wondering if there is an open source project out there in the wild, or failing that, a way to get started with something like this, as I am not an image-processing guru. In particular, I just need to figure out which lines are panel borders and where to slice the page into smaller pictures. Because comics all have their own preferences for line thickness, I'm not sure there is a simple approach that works across many different border thicknesses and styles. Language doesn't matter so much here; I'm really asking about concepts and patterns of attack.
You can start by looking at the Duda-Hart implementation of the Hough transform for lines.
http://en.wikipedia.org/wiki/Hough_transform
The Hough algorithm will yield equations for straight lines. From that you can find intersections, identify rectangles, etc.
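As a starting point, here is a toy version of the Hough voting step, assuming a binary edge image as a 2D array and the usual rho = x·cos(theta) + y·sin(theta) parameterization (a learning sketch, not a production implementation):

```javascript
// Toy Hough transform for straight lines in a binary image.
// Each edge pixel votes for every (theta, rho) bin it could lie on;
// bins with many votes correspond to lines in the image.
function houghLines(image, thetaSteps = 180) {
  const h = image.length;
  const w = image[0].length;
  const acc = new Map(); // "thetaIndex,rho" -> votes
  for (let y = 0; y < h; y++) {
    for (let x = 0; x < w; x++) {
      if (!image[y][x]) continue; // only edge pixels vote
      for (let t = 0; t < thetaSteps; t++) {
        const theta = (t * Math.PI) / thetaSteps;
        const rho = Math.round(x * Math.cos(theta) + y * Math.sin(theta));
        const key = t + "," + rho;
        acc.set(key, (acc.get(key) || 0) + 1);
      }
    }
  }
  // Strongest bins first; return the top few as line candidates.
  return [...acc.entries()]
    .sort((a, b) => b[1] - a[1])
    .slice(0, 4)
    .map(([key, votes]) => {
      const [t, rho] = key.split(",").map(Number);
      return { theta: (t * Math.PI) / thetaSteps, rho, votes };
    });
}
```

A real implementation would use a dense accumulator array, quantize rho explicitly, and run on an edge-detected image rather than raw pixels, but the voting principle is the same.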
You can also use a kernel-based corner detection to find T-, L-, and X-intersections.
http://en.wikipedia.org/wiki/Corner_detection
One difficulty is that some panels in comics won't have "hard" edges, or may have edges that are squiggly, circular/elliptical, drawn with French curves, etc. You can find particular algorithms for particular problems, but it would be hard to generalize them into a set of rules and programmatic logic that works for all (or even most) samples. A hallmark of a good comic could arguably be its elegant and sometimes surprising panelization, "surprising" being a synonym for unpredictable. Although there are many methods to "segment" an image into different regions, this is still an active area of research.
But if you start with Hough lines you'll have a good start and learn a lot about image processing.
Scenario: I have SVG image that I can zoom-in and zoom-out. Depending on the zoom, I will display more/less details on the visible part.
The question is: should I take care not to display details on the parts that are not currently visible (off-screen), or is the rendering engine smart enough to skip (clip) those parts before they are rendered?
Yes, browsers are usually clever enough to not render things outside the viewport area.
Note however that the browser still needs to traverse the entire document tree, so even things outside the viewport area can have an impact. It's usually enough to mark the non-interesting subtrees with display="none" to let the browser skip over them when traversing. On small documents that's usually not something that you need to worry about.
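For example, a hypothetical zoom-based rule can compute a display value per detail layer; you would then set display="none" on the corresponding SVG groups so the browser skips those subtrees:

```javascript
// Decide per-layer `display` values from the current zoom factor.
// Layer ids and minZoom thresholds are illustrative.
function displayAttributes(layers, zoom) {
  const out = {};
  for (const layer of layers) {
    out[layer.id] = zoom >= layer.minZoom ? "inline" : "none";
  }
  return out;
}
```

In the document itself you would apply the result to each group with something like element.setAttribute("display", value).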
I suspect clipping will always be applied to the current viewport. But you are probably changing the DOM when updating detail visibility, and restricting those changes to the visible parts only can make a difference.
The easiest way to find out, though, is to measure. Make two prototypes, one with manual clipping and one without, and look for differences in rendering speed in various renderers.