3D modeling for data structures

I'm looking for 3D modeling/animation software. Honestly, I don't know if this is achievable - but what I want is some kind of visual representation of various ideas.
Speaking in the future tense: if I were to read about the boot process of an OS, I would visualize the various data structures building up, and I could step through the process with a slider or so. If I were to think about a complex data structure, I would have a 3D representation of the various links and relations within it. Another example would be a Git repository at work - how commits/trees/blobs are linked in space, and how they progress as time passes. And all of these would be interactive.
The reason I want to do this is that it'd make the process very easy to explain - not just to others, but also to myself. I could revisit my model, and it'd be a quick brush-up.
I'm sure there is no ready-to-use software for this. What I could think of are Flash (with ActionScript), Blender 3D (Python scripting?), or Synfig. Whatever it is, I'll have to learn it from scratch, and I'm looking for suggestions as to which one (even if not on my list) is the right one to choose.
Thanks

I've used Blender, but it requires a large upfront investment of time, especially to learn the UI. Blender is all about the hotkeys. Once you have them memorized, it's great. But getting there takes a while.
Alice might be worth a look. It looks easy to use and supports scripting.
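If you do end up going the Blender route, the scripting side is less daunting than the UI. Here's a minimal sketch of the kind of thing the question describes, assuming a recent Blender's bpy Python API (2.8+ keyword names); the node names, positions, and links are made-up toy data, not any particular structure:

```python
# Minimal sketch for Blender's Python console: spheres for nodes, cylinders for links.
import bpy
from mathutils import Vector

# Hypothetical toy data: node name -> position, plus links between nodes.
nodes = {"root": (0, 0, 0), "left": (-2, 0, -2), "right": (2, 0, -2)}
links = [("root", "left"), ("root", "right")]

for name, pos in nodes.items():
    bpy.ops.mesh.primitive_uv_sphere_add(radius=0.4, location=pos)
    bpy.context.object.name = name

for a, b in links:
    start, end = Vector(nodes[a]), Vector(nodes[b])
    direction = end - start
    # Place a thin cylinder at the midpoint of the link and align its axis with it.
    bpy.ops.mesh.primitive_cylinder_add(radius=0.05, depth=direction.length,
                                        location=(start + end) / 2)
    bpy.context.object.rotation_mode = 'QUATERNION'
    bpy.context.object.rotation_quaternion = direction.to_track_quat('Z', 'Y')
```

From there you could keyframe objects appearing over time to get the "step through the process" behaviour the question asks about.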

There are many tools available for 3D modeling. I'm a fan of 3D Studio Max, but there are also Blender, Maya, and trueSpace.
You may want to take a look at the field of visualization to help with illustrating your message.

I suspect that packages such as 3D Studio Max and Blender are too powerful, in the sense that your relatively simple requirements will force you onto too long a learning path. Try Googling for "data structure animations" to get an idea of what others have used. Also, head over to Information Aesthetics; they recently featured a tool for visualising commits and checkouts to/from repositories, and similar things.
Lego Designer is nearly my favourite - it's very good for 3D block animations, but so far I haven't figured out how to add text to the blocks.

Related

Figma or Sketch? Which one is better for design?

I get confused about choosing between Figma and Sketch, so I want to know what the difference is, which is the best to use, and why.
I am not a Sketch user. I've been using Figma as my design tool for 3 years, and it's improving day by day. Keep in mind that design tools are not what you should be concerned about, but rather the skill you have. You can design even in Paint; just a few years ago many big companies were using Photoshop as their design tool, and in fact some are still using it. So the conclusion is: no matter the tool you are using, you can design everything related to UI/UX. The only boundary is your imagination.

Tools/Techniques to use our ability to think spatially

What software/UI techniques can leverage our spatial memory? I think and remember in physical space; often the location of something is as important as its content. For instance, I keep an untidy desk, but I know where to find things, and I use different parts of my (multi-screen) desktop for different windows/icons. I annotate books (with Post-its) and can remember the facing page, top/bottom, etc. In the good old days we used to file things so we could find them later; now we use search, but that doesn't really use our spatial abilities. Google Maps etc. are brilliant, but they're only really being used for the real world - what about our internal locations? How can we leverage the wetware to best advantage?
EDIT -> I've thought about a code tool that would profile the running code and then build a visualisation with classes/methods scaled to match their use, with large/small motorways/footpaths between them. The spatial layout still escapes me, though - UI at the top, DB at the bottom, but how do you position a class in 3D based on its usage?
Slightly off topic, since it's not code per se, but I've built my own tools to translate some of our complicated XML config files into DOT format and run them through Graphviz so that I could visualise them. We were able to strip out lots of pointless stuff from them after just looking at the result.
Wetware win :o)
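For anyone wanting to try something similar, the XML-to-DOT conversion can be tiny. A rough sketch using only the Python standard library - the filenames and the "element nesting becomes edges" rule are my own illustration, not the poster's actual tool:

```python
# Rough sketch: turn an XML file's element nesting into a Graphviz DOT graph.
import itertools
import xml.etree.ElementTree as ET

def xml_to_dot(xml_path, dot_path):
    tree = ET.parse(xml_path)
    ids = itertools.count()
    lines = ["digraph config {"]

    def walk(elem, parent_id):
        node_id = f"n{next(ids)}"
        lines.append(f'  {node_id} [label="{elem.tag}"];')
        if parent_id is not None:
            lines.append(f"  {parent_id} -> {node_id};")  # edge from parent to child element
        for child in elem:
            walk(child, node_id)

    walk(tree.getroot(), None)
    lines.append("}")
    with open(dot_path, "w") as f:
        f.write("\n".join(lines))

xml_to_dot("config.xml", "config.dot")
# Then render with Graphviz, e.g.: dot -Tpng config.dot -o config.png
```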

How does a marker-based augmented reality algorithm (like ARToolkit's) work?

For my job I've been using a Java version of ARToolkit (NyARToolkit). So far it has proven good enough for our needs, but my boss is starting to want the framework ported to other platforms such as the web (Flash, etc.) and mobile. While I suppose I could use other ports, I'm increasingly annoyed by not knowing how the kit works and, beyond that, by some of its limitations. Later I'll also need to extend the kit's abilities to add things like interaction (virtual buttons on cards, etc.), which as far as I've seen aren't supported in NyARToolkit.
So basically, I need to replace ARToolkit with a custom marker detector (and in the case of NyARToolkit, try to get rid of JMF and use a better solution via JNI). However, I don't know how these detectors work. I know about 3D graphics and I've built a nice framework around it, but I need to know how to build the underlying tech :-).
Does anyone know any sources about how to implement a marker-based augmented reality application from scratch? When searching on Google I only find "applications" of AR, not the underlying algorithms :-/.
'From scratch' is a relative term. Truly doing it from scratch, without using any pre-existing vision code, would be very painful and you wouldn't do a better job of it than the entire computer vision community.
However, if you want to do AR with existing vision code, this is more reasonable. The essential sub-tasks are:
Find the markers in your image or video.
Make sure they are the ones you want.
Figure out how they are oriented relative to the camera.
The first task is keypoint localization. Techniques for this include SIFT keypoint detection, the Harris corner detector, and others. Some of these have open-source implementations - I think OpenCV has the Harris corner detector in the function GoodFeaturesToTrack.
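As a quick illustration of that first step using OpenCV's Python bindings (the image filename and the parameter values here are made up):

```python
# Sketch: detect corner-like keypoints with OpenCV's goodFeaturesToTrack,
# using the Harris corner response. The image path is illustrative.
import cv2

img = cv2.imread("frame.png")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Up to 200 corners, quality threshold 0.01, at least 10 px apart.
corners = cv2.goodFeaturesToTrack(gray, maxCorners=200, qualityLevel=0.01,
                                  minDistance=10, useHarrisDetector=True)

# Draw the detected corners for inspection.
for c in corners.reshape(-1, 2):
    cv2.circle(img, (int(c[0]), int(c[1])), 3, (0, 255, 0), -1)
cv2.imwrite("corners.png", img)
```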
The second task is making region descriptors. Techniques for this include SIFT descriptors, HOG descriptors, and many many others. There should be an open-source implementation of one of these somewhere.
The third task is also done by keypoint localizers. Ideally you want an affine transformation, since this will tell you how the marker is sitting in 3-space. The Harris affine detector should work for this. For more details go here: http://en.wikipedia.org/wiki/Harris_affine_region_detector
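As a rough illustration of that third step - not the Harris-affine route suggested above, but the common alternative of solving for pose once you already know where the four marker corners are in the image - OpenCV's solvePnP recovers the marker's rotation and translation relative to the camera. The marker size, corner coordinates, and camera intrinsics below are made-up values:

```python
# Sketch: recover marker pose from four corner correspondences with solvePnP.
import numpy as np
import cv2

marker_size = 0.08  # 8 cm square marker, in metres (illustrative)
object_points = np.array([[-1, -1, 0], [1, -1, 0], [1, 1, 0], [-1, 1, 0]],
                         dtype=np.float32) * (marker_size / 2)

# Detected 2D corner positions in the image (hypothetical values).
image_points = np.array([[310, 220], [402, 225], [398, 315], [305, 310]], dtype=np.float32)

# Pinhole camera intrinsics (hypothetical calibration) and zero distortion.
camera_matrix = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], dtype=np.float32)
dist_coeffs = np.zeros(5)

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, camera_matrix, dist_coeffs)
# rvec/tvec describe how the marker sits in 3-space relative to the camera;
# convert rvec to a 3x3 matrix to build the transform for your virtual objects.
rotation, _ = cv2.Rodrigues(rvec)
```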

When should I break into GUI/game development?

I am a hobbyist console C++ developer. I have worked with pointers, arrays, std::vectors, std::strings, classes, and several data structures, including stacks and binary trees. I have some experience in linear algebra and geometry, and know the basics of physics. I do NOT have experience with Win32, Qt, OpenGL, DX9, OGRE, etc. I am still learning about the more valuable parts of OOP, like polymorphism.
I started with C++ as a first language and do not have experience with other languages. I could probably work with C, but I'd need to get used to manipulating char*'s and regular arrays (and not initializing variables).
My question is, with my experience, when should I break into the development of GUI applications/game applications? Do I need to ground myself more firmly in certain areas of math, become comfortable with Win32, or get used to an SDK?
If this question is too subjective for you to comfortably give advice, then when did you break into GUI/game development, and what steps did you take to make yourself comfortable with it?
Editing this so it will get bumped. Does anyone else have any opinions?
Caveat: I am a very "learn-by-doing" type of person, so take this with a grain of salt.
Sounds like you know enough programming basics to jump into something more realistic, and have enough background to justify that realistic project being a game.
I'd recommend downloading Visual C# Express and Microsoft's XNA Game Studio 3.0.
XNA is a game framework that has a lot of stuff done for you (sound, sprites, 3D support, etc.) built on a professional-quality C# platform and it would be a good starting point. Create a new XNA project and play around. Get some stuff to appear on the screen, then learn to manipulate it with user input. If you are interested in 3D, make a 3D shape such as a triangle. Then, make it spin. Then, make it spin based on user input. Then, add other objects and collisions.
Surely, there will be things in the framework that you don't understand. Tackle them as they come - use Google and ask questions here until you do understand them. Take it one step at a time and you should be just fine.
I'd personally recommend that you start out with Win32; try creating a basic window and move on from that point. Try making a simple 2D game engine in which you can make a game like chess or so. This could also serve as a project for which you could write an AI, which is another part of game development!
After you finish that, the next step should be 3D. You could take the engine you wrote before and modify it from 2D to 3D. Pick a 3D API: OpenGL or DirectX. Once you have a basic engine, start writing a game. Need extra functionality? Then add it to the engine!
Math-wise you should know what matrices are. Trigonometry can come in handy as well.
I wouldn't waste my time with XNA; it's just hype. :P
It seems you have already gained enough basic knowledge of a programming language to start game programming. I'm with you on building on what you have already gained, such as learning OOP and practicing more with pointers. I recommend you move on and don't turn to learning another programming language to achieve your goals.
So if you are interested in game programming, I recommend you pick a C++ framework and work on it, you'll definitely learn more advanced programming by just using it.
I recommend Gosu. It's not full of advanced features, which can be an advantage, but it has a very clean design and uses C++ in an elegant and modern way, which makes it very friendly, especially for beginners.
Also HGE is another good 2D engine.
To sum up: dive into programming more by actually doing it with what you have now. That's how you'll progress, and you'll be amazed by the results. When doing it, don't get distracted by other languages and tools when you already know something similar; and when learning a tool that helps you build on your current knowledge - in your case, a C++ engine - don't choose a very complicated one (IMO, that means OpenGL, DirectX, Win32, etc.), because you'll end up spending your time learning the tool rather than using it, and there's a great chance you'll get frustrated. You can always learn the low-level things later, and they will make a lot more sense then.
As this question is kind of subjective (every programmer has a favourite library to start with), I will recommend SDL, as it is simple, well structured, and very complete. There are a lot of tutorials out there to guide you step by step from making a simple window to complex 3D manipulation, and everything can be implemented with ease.
As a side note, if you want to start programming games, I would also recommend that you read some tutorials or books about game basics (initialization, the game loop, update cycles), so that you know how to put your knowledge to good use.
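To make that last point concrete, here is a minimal sketch of the classic fixed-timestep game loop those tutorials describe - written in Python for brevity, with placeholder input/update/render functions, so it is language-agnostic pseudocode you'd adapt to C++ and your chosen library:

```python
# Sketch of a fixed-timestep game loop: initialize, then repeatedly
# process input, update the simulation in fixed steps, and render.
import time

TIMESTEP = 1.0 / 60.0  # simulate at 60 updates per second

def process_input(state):   # placeholder: poll keyboard/mouse/window events
    pass

def update(state, dt):      # placeholder: advance physics/AI by dt seconds
    state["ticks"] += 1

def render(state):          # placeholder: draw the current frame
    pass

def run():
    state = {"running": True, "ticks": 0}   # initialization
    previous = time.perf_counter()
    accumulator = 0.0
    while state["running"] and state["ticks"] < 180:  # stop after ~3 s for the sketch
        now = time.perf_counter()
        accumulator += now - previous
        previous = now
        process_input(state)
        while accumulator >= TIMESTEP:       # catch up in fixed simulation steps
            update(state, TIMESTEP)
            accumulator -= TIMESTEP
        render(state)

run()
```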

BumpTop - what's the relation with physics?

Along with all the buzz about the wonderful BumpTop desktop environment, I'm now wondering: what is the relation between physics and BumpTop's techniques? Basically, I am interested in learning the techniques/algorithms used in this desktop environment. For example:
Collision detection - used when one icon is about to collide with another.
Any other known techniques?
It probably uses a fairly common rigid-body dynamics simulator, as used in (simpler/older) computer games. If you want to play with one yourself, have a look at the Open Dynamics Engine.
I'd say that it uses mechanics (the branch of physics that describes motion) to determine/calculate the outcome of object interaction.
I remember seeing a very early demo of this months ago - it looks very impressive!
Well, it looks as though it's using friction and velocity as well. The friction slows the animations down - if it didn't then things would just fly out of the way. Velocity is used to have things move at speed in certain directions.
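As a toy illustration of that idea (not BumpTop's actual code - the friction coefficient, timestep, and initial velocity below are made up), a tossed icon can be modelled as a position with a velocity that decays under friction each step:

```python
# Toy sketch: an icon tossed across the desktop, slowed by friction each frame.
dt = 1.0 / 60.0          # 60 simulation steps per second
friction = 3.0           # per-second damping factor (illustrative)
pos = [0.0, 0.0]
vel = [240.0, 90.0]      # pixels per second, from the user's "toss"

for step in range(120):  # simulate two seconds
    pos[0] += vel[0] * dt
    pos[1] += vel[1] * dt
    # Damping: velocity shrinks a little every step, so the icon glides
    # and settles instead of flying off the desktop.
    vel[0] -= vel[0] * friction * dt
    vel[1] -= vel[1] * friction * dt

print("icon comes to rest near", pos)
```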
More info here:
BumpTop: "Extensive use of physics effects like bumping and tossing is applied to documents when they interact, for a more realistic experience."