I've been working at a company that has its own proprietary software, and I learnt rigging in that. I've gotten so used to rigging in it that when I tried to do some rigging in Maya, I found it quite different. I've gone over quite a few tutorials online, but I didn't find them very logical in their approach, and I have yet to find one that is logical and doesn't involve an unnecessarily complicated method. I've got the logic of how to rig a character, but I'm struggling to implement it in Maya. Does it make sense to learn Maya from scratch? Or could someone point me in a better direction to learn rigging?
Does it make sense to learn Maya from scratch?
Probably. Everything you do in Maya ends up as nodes in a node network, and every node network is a rig, just not usually called one. So everything you do in Maya is a rig of sorts, at least until you decide to wipe it away. You can't really appreciate the system unless you start from the very basic stuff, because the beauty of the system starts from there.
So start off by making a poly cube, then open the rigger's main tool, the Hypergraph (or the Node Editor in Maya 2012, good for small graphs). Observe that it's actually already a rig consisting of three nodes:
a polyCube node connected to a shape node and a transform node (the transform is the parent of the shape node; it just isn't visible as such here, because the parenting resides in a separate data structure).
All Maya rigs look something like this, just more complex. Now, you can redirect any arrow to almost any node attribute of a compatible type. Try middle-mouse-dragging from the transform to the polyCube node: you can now drive the polyCube's attributes with attributes of the transform.
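The same experiment can be run in script form. A minimal sketch for the Script Editor's Python tab (assuming a fresh scene, so Maya assigns its default node names):

```python
import maya.cmds as cmds

# Making a cube returns two of the three nodes: the transform and the
# polyCube construction node (the shape hangs under the transform).
transform, poly_cube = cmds.polyCube()

# Drive the cube's width from the transform's Y translation -- the scripted
# equivalent of middle-mouse-dragging an arrow in the Hypergraph.
cmds.connectAttr(transform + '.translateY', poly_cube + '.width')

# List the connections in our little three-node rig.
print(cmds.listConnections(poly_cube, connections=True))
```

Move the cube up and down after running this, and you'll see the width change with it.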
Now, many people do not find this intuitive, so Maya has a layer on top of this level: the menus build on this stuff without showing it to you. So you can, for example, create a skeleton and smooth bind it to a mesh without worrying about the node network. If, however, you truly want to learn to rig in Maya, then you must keep observing the node network and eventually learn to read the node reference, because you'll need to understand nodes and attributes once you start doing things like making the size of one node depend on the position of a locator.
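Both layers are scriptable too. A hedged sketch of the menu-level workflow (skeleton plus smooth bind) followed by the locator-driven trick just mentioned; the name widthCtrl is made up:

```python
import maya.cmds as cmds

# Menu-level workflow: a two-joint skeleton smooth-bound to a cube,
# no node editing required.
transform, poly_cube = cmds.polyCube(height=4)
cmds.select(clear=True)            # so the first joint isn't parented to the cube
root = cmds.joint(position=(0, -2, 0))
cmds.joint(position=(0, 2, 0))
cmds.skinCluster(root, transform)  # smooth bind; creates a skinCluster node

# Node-level trick: make the cube's width depend on a locator's position.
locator = cmds.spaceLocator(name='widthCtrl')[0]
cmds.connectAttr(locator + '.translateX', poly_cube + '.width')
```

Open the Hypergraph afterwards and you'll see every one of those menu actions as plain nodes and connections.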
Personally, I find Maya confusing if I don't look in the Hypergraph.
We're on our migration path to IaC, and I buy into the immutable concept. But my team is asking good questions, like: "You want me to redeploy an entire fleet of VMs just because we need to make a small change? Isn't that overkill? It would be faster if I just ran some remote PowerShell to implement the change."
And I get the benefits of immutability. But the questions are pushing me to wonder whether we should refine our concept of immutable. In our case, we're looking at SaltStack, with Packer building immutable images, for a few dozen VMs (plus other infrastructure items that are out of scope for this question). Salt makes having both immutable and mutable IaC easy (the latter via applying states). But whatever your tool, where do you draw the line?
Do we go 100% immutable, where every change requires throwing out an old VM and ushering in a new one, and nothing is applied via states (or any manual method)?

Or do we go hybrid? Possibly a very basic "base" immutable image, with everything else applied mutably (via Salt states, in my case), losing some benefits of immutability but gaining runtime agility? Note that we'd still have change management in place, so it's not as though we'd lose track of our states.
The answer here may be "it depends," but I'd love to hear whether anyone has good strategies that make sense and can be applied consistently. What belongs in the immutable Packer image? What is owned by Salt states?

(I realize this is basically an orchestration vs. config management argument, but I'm already invested in the former; my question is how much, if any, of the latter to pepper in, and what successful strategies for that look like. I've read seemingly every article out there on combining the two, but none with any real guidance.)
I'm going to give a half-answer to my own question, to close this out.

If you read the internet, immutability is the holy grail of infrastructure and only good things come from it; everyone on Earth and Mars wants it. But the truth is that not everyone really needs it, or it doesn't work for them (to be fair, plenty of articles do say you need to weigh the benefits against the costs of the strategy). Below are the points we used in our solution:
Immutability is a goal. It's a goal we will never get to 100% on, and if you're starting from scratch in the IaC space, you probably shouldn't set your goals too high on this. We will move more things to an immutable space over time. It's ok to start with 1% immutability.
There is nothing wrong with change management and pushing IaC changes out via something like Salt states. In many cases, this is already going to be a huge improvement over what people in my boat have.
We started with the simplest "base" images created by Packer, installing other apps as needed via Salt. Over time, more of those apps will be built into those Packer images. The key is first building a manageable platform for IaC, and the easiest way to do that is via config management, if you have any sort of existing infrastructure to start with.
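To make that split concrete, here is a hedged sketch of a minimal Packer template using the salt-masterless provisioner. Every value is a placeholder, and the amazon-ebs builder is only an example; substitute whatever builder matches your platform:

```json
{
  "builders": [
    {
      "type": "amazon-ebs",
      "region": "us-east-1",
      "source_ami": "ami-xxxxxxxx",
      "instance_type": "t3.medium",
      "ssh_username": "admin",
      "ami_name": "base-image-{{timestamp}}"
    }
  ],
  "provisioners": [
    {
      "type": "salt-masterless",
      "local_state_tree": "salt/states"
    }
  ]
}
```

Whatever the template bakes is the immutable side of the line; anything applied later to a running VM via salt-call state.apply is by definition on the mutable side, and that is the part we keep under change management.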
So in the end, a hybrid approach seems to work best for us, and I think it will for most shops out there. Unless you have a massive team dedicated to maintaining your IaC setup, a completely immutable platform is likely unrealistic. I hope this helps anyone who was in the same place I was.
I recently (5 weeks ago) started my first school year in a school-based apprenticeship to become an IT assistant.

We're learning programming and are starting with very basic Processing things, while the ultimate plan is to get into C#.

Now, I understand that Processing might not be the best language for my little project, but I would still like to work this out somehow.

What I want to build is a "Stargate Dial Computer". If you know the TV show, you'll know what I'm talking about.
I wanted to make it visually appealing so I decided to use one of the available tools to create my shapes as I am using a DHD (term from the show) for the dial process - see picture: https://i.imgur.com/r7jBjRG.png
This small shape setup is already over 500 lines of code, and that seems unwise in itself. Besides that, the plan is for every single one of these trapezoids to be a pushable button - but to achieve that manually, I'd have to check their coordinates against the mouse position to use them as buttons.

What I'm asking for is any input on how to work with these shapes in a logical way that makes my idea possible.

For example: checking the shape's color instead of checking each shape's coordinates some 40 times, and getting the "active" shape's size from some kind of function. Or a way to loop over every shape one by one, checking each beginShape/endShape instance - if that wouldn't be a performance nightmare. A rough outline of the structure I have in mind follows below.

Keep in mind that I am a beginner. I do know the basics, also of other languages, and I can apply some programming logic here and there - but since I'm not sure yet what Processing can and can't do, I'm looking for an answer to whether this is even reasonable or possible.
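Roughly the structure I have in mind (plain Python with made-up names, since I'm still thinking it through; the same logic should port directly to Processing's Java syntax):

```python
# Hypothetical sketch: each button is just a named list of (x, y) vertices.
buttons = [
    {"name": "glyph_1", "vertices": [(10, 10), (60, 10), (50, 40), (20, 40)]},
    {"name": "glyph_2", "vertices": [(70, 10), (120, 10), (110, 40), (80, 40)]},
]

def point_in_polygon(px, py, vertices):
    """Standard ray-casting test: count how many edges a ray from (px, py)
    crosses; an odd count means the point is inside."""
    inside = False
    j = len(vertices) - 1
    for i in range(len(vertices)):
        xi, yi = vertices[i]
        xj, yj = vertices[j]
        if (yi > py) != (yj > py) and px < (xj - xi) * (py - yi) / (yj - yi) + xi:
            inside = not inside
        j = i
    return inside

def on_mouse_pressed(mouse_x, mouse_y):
    # One loop replaces 40 hand-written coordinate checks.
    for button in buttons:
        if point_in_polygon(mouse_x, mouse_y, button["vertices"]):
            print(button["name"], "pressed")

on_mouse_pressed(30, 20)  # prints: glyph_1 pressed
```

In Processing itself, mouseX/mouseY and the mousePressed() hook would supply the click coordinates, and each trapezoid's vertex list is exactly what already goes into beginShape()/vertex()/endShape().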
Any help and ideas would be much appreciated!
Thanks!
I am a CS Research student at UW, and my group is at the point of trying to visualize specific network traffic that is put into a neo4j graph DB in real time.
I have read about many different tools, such as Gephi, Cytoscape, Rickshaw (based on D3.js), and D3.js itself, among others.

So far we are going forward with D3.js, but we wanted to get the community's opinion. We can't use Cytoscape because of neo4j, and we feel that D3.js would work best with semi-large data in a fast, real-time environment.
Suggestions?
Perhaps for another question, but also feel free to input: Best way to implement neo4j? Java, Ruby, node.js?
Thank you!
There's no silver-bullet solution for this kind of problem; a lot depends on what you have in mind to do, the team you have, and your budget (in money and time).
I wouldn't recommend D3 unless you meet one of the following:

you want to create a brand-new way to visualize your data

you have people on your team (that can be you) who are skilled with D3

you already have other D3 widgets/visualizations to integrate

If you don't meet any of the entries above, I would put D3 to one side and tell you to have a look at:
SigmaJS, an open-source, free JavaScript library.

KeyLines, a commercial JavaScript toolkit.

VivaGraphJS, an open-source, free JavaScript library.
Disclaimer: I'm one of the KeyLines developers.
Depending on the size of your data, the choice of library can change: if you plan to have no more than 300-400 nodes on your chart and don't need particular styling or animations, then SigmaJS, I think, is more than fine; if you're looking for something more advanced for styling or animation, I would recommend KeyLines, because it is designed to handle these kinds of situations (providing an incremental layout) and scales up to 2,000 nodes with no problems, although at that size I might suggest having a filter on the side.

I would name VivaGraph as a last resort: SigmaJS has a WebGL renderer as well and provides much nicer rendering, IMHO. VivaGraphJS will soon be replaced with ngraph, which will use an agnostic approach to renderers: you can use PIXI, Fabric, or whatever you want.

Using a WebGL renderer makes sense when you load your assets once and reuse them all the time; if you're restyling your chart elements in a real-time scenario, there's no advantage over Canvas, IMHO.
My understanding: Gephi doesn't do well with real-time updates; it's usually used on static data.
One major consideration - what is the visualization you wish to present? Is it a directed graph? Cyclic? Weighted? Additional labels?
Some toolkits are 'fixed' in what they can display, but make it easy to present a graph. Others (like d3) are very extensible, so you could create just about anything.
For the purposes of the Stack Overflow format, you might get better answers if you can pin down the limitations and needs of your system (actual data rate, thin/thick client, type of visualization, etc.).
Check out VivaGraph, which uses WebGL for rendering and scales really well, even for larger networks. They have some nice examples for really large ones (Facebook, Amazon).
http://github.com/anvaka/VivaGraphJS
I think D3 is great; however, there was recently a talk on Sigma.js at FOSDEM explaining that it scales better for bigger graphs. See also http://thewhyaxis.info/hairball/
In Bruce Tognazzini's quiz on Fitts's Law, the question discussing the bottleneck in the hierarchical menu (as used in almost every modern desktop UI) talks about his design for the original Mac:
The bottleneck is the passage between the first-level menu and the second-level menu. Users first slide the mouse pointer down to the category menu item. Then, they must carefully slide the mouse directly across (horizontally) in order to move the pointer into the secondary menu.

The engineer who originally designed hierarchicals apparently had his forearm mounted on a track so that he could move it perfectly in a horizontal direction without any vertical component. Most of us, however, have our forearms mounted on a pivot we like to call our elbow. That means that moving our hand describes an arc, rather than a straight line. Demanding that pivoted people move a mouse pointer along in a straight line horizontally is just wrong. We are naturally going to slip downward even as we try to slide sideways. When we are not allowed to slip downward, the menu we're after is going to slam shut just before we get there.

The Windows folks tried to overcome the pivot problem with a hack: If they see the user move down into range of the next item on the primary menu, they don't instantly close the second-level menu. Instead, they leave it open for around a half second, so, if users are really quick, they can be inaccurate but still get into the second-level menu before it slams shut. Unfortunately, people's reaction to a heightened chance of error is to slow down, rather than speed up - a well-established phenomenon. Therefore, few users will ever figure out that moving faster could solve their problem. Microsoft's solution is exactly wrong.

When I specified the Mac hierarchical menu algorithm in the mid-'80s, I called for a buffer zone shaped like a <, so that users could make an increasingly greater error as they neared the hierarchical without fear of jumping to an unwanted menu. As long as the user's pointer was moving a few pixels over for every one down, on average, the menu stayed open, no matter how slow they moved. (Cancelling was still really easy; just deliberately move up or down.)
This just blew me away! Such a simple idea that would result in a huge improvement in usability. I'm sure I'm not the only one who regularly has the next level of a menu slam shut because I didn't move the mouse pointer in a perfectly horizontal line.
So my question is: Are there any modern UI toolkits which implement this brilliant idea of a < shaped buffer zone in hierarchical menus? And if not, why not?!
No mainstream GUI toolkit (Win32, MFC, Cocoa, GTK, KDE, FOX, FLTK) does it.
In fact, menu handling is usually so terribly featureless and badly implemented that you have to wonder why nobody improves it in any way.
Apple and GTK are the worst toolkits here.
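Which is a shame, because the core test is tiny. Here's a minimal sketch (plain Python, hypothetical names) of the '<'-shaped buffer zone described above: the submenu stays open as long as the pointer gains a few pixels toward it for every pixel of vertical drift.

```python
SLOPE = 3.0  # require ~3 px of progress toward the submenu per 1 px of drift

def keep_submenu_open(dx, dy):
    """Decide, from the pointer's movement since the last event, whether an
    open submenu should stay open. Assumes the submenu opens to the right
    of the parent item, so positive dx is progress toward it."""
    if dx <= 0:
        return False  # moving away, or straight up/down: a deliberate cancel
    return dx >= SLOPE * abs(dy)  # inside the '<' wedge

# The wedge in action: sideways-with-drift survives, mostly-vertical cancels.
print(keep_submenu_open(dx=8, dy=-2))  # True: slipping down while sliding over
print(keep_submenu_open(dx=1, dy=-6))  # False: heading for another menu item
```

A real implementation would average over a short window of motion events ("on average," as the quote says) rather than judging each event in isolation.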
In my spare time I program games as a hobby; different sorts of things, currently nothing very complex: 2D shooters, tile-based games, puzzle games, and so on.

However, as development of these games goes on, I find it becomes hard to manage the complexity of the different subsystems within them: interface, world view/model, event handling, states (menus, pause, etc.), special effects, and so on.

I attempt to keep connections to a minimum and reduce coupling, but many of these systems need to talk to each other in one way or another, ideally in a way that doesn't require holding the entire code base in my head at once.

Currently I try to delegate different subsystems and subsystem functions to different objects that are aggregated together, but I haven't found a communication strategy that is decoupled enough.

What sort of techniques can I use to help me juggle all of these subsystems and handle the complexity of an ever-growing system that needs to be modular enough to facilitate rapid requirements change?
I often find myself asking the same questions:
How do objects communicate with each other?
Where should the code that handles specific subsystems go?
How much of my code base should I have to think about at one time?
How can I reduce coupling between game entities?
Ah, if only there were a good answer to your question. Then game development wouldn't be nearly as difficult, risky, and time-consuming.
I attempt to keep connections to a minimum and reduce coupling, but many of these systems need to talk to each other in one way or another, ideally in a way that doesn't require holding the entire code base in my head at once.
They do, but often they don't need to talk in quite as direct a way as people first believe. For example, it's common to have the game state push values into its GUI whenever something changes. If instead you just store the values and let the GUI query them (perhaps via an observer pattern), you have removed all GUI references from the game state. It's often enough to simply ask whether a subsystem can pull the information it needs through a simple interface, instead of having the data pushed into it.
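As an illustration, here's a minimal sketch of that pull/observer arrangement (plain Python, all names hypothetical): the game state exposes its values and notifies subscribers that something changed, and the GUI pulls what it needs, so the state never references GUI code.

```python
class GameState:
    def __init__(self):
        self.score = 0
        self._observers = []   # plain callbacks; no GUI imports here

    def subscribe(self, callback):
        self._observers.append(callback)

    def add_score(self, points):
        self.score += points
        for notify in self._observers:
            notify(self)       # observers pull whatever fields they need

class ScoreLabel:
    """GUI side: knows about the state, but the state never knows about it."""
    def __init__(self, state):
        state.subscribe(self.refresh)
        self.text = ""

    def refresh(self, state):
        self.text = f"Score: {state.score}"

state = GameState()
label = ScoreLabel(state)
state.add_score(100)
print(label.text)  # Score: 100
```

The dependency now points only one way: you can delete the GUI class and the game state compiles and runs untouched.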
How do objects communicate with each other?
Where should the code that handles specific subsystems go?
How much of my code base should I have to think about at one time?
How can I reduce coupling between game entities?
None of this is really specific to games, but it's a problem that arises often with games because there are so many disparate subsystems that we've not yet developed standard approaches to. If you take web development, there are really just a small number of established paradigms: the "one template/code file per URI" approach of something like PHP, or the "model/view-template/controller" approach of RoR and Django, plus a couple of others. But for games, everybody is rolling their own.
But one thing is clear: you can't solve the problem by asking "how do objects communicate?" There are many different types of object, and they require different approaches. Don't try to find one global solution to fit every part of your game - input, networking, audio, physics, artificial intelligence, rendering, serialisation - it's not going to happen. If you try to write any application by coming up with a perfect IObject interface that suits every purpose, you'll fail. Solve individual problems first and then look for the commonality, refactoring as you go. Your code must first be usable before it can even be considered reusable.
Game subsystems live at whatever level they need to, no higher. Typically I have a top level App, which owns the Graphics, Sound, Input, and Game objects (among others). The Game object owns the Map or World, the Players, the non-players, the things that define those objects, etc.
Distinct game states can be a bit tricky but they're actually not as important as people assume they are. Pause can be coded as a boolean which, when set, simply disables AI/physics updates. Menus can be coded as simple GUI overlays. So your 'menu state' merely becomes a case of pausing the game and showing the menu, and unpausing the game when the menu is closed - no explicit state management required.
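A minimal sketch of that idea (hypothetical names): the pause flag gates the update loop, and the menu is nothing more than an overlay drawn on top of the frozen world.

```python
class Game:
    """No explicit state machine: just a pause flag and an overlay flag."""

    def __init__(self):
        self.paused = False
        self.menu_visible = False

    def open_menu(self):
        self.menu_visible = True
        self.paused = True            # freezes AI/physics below

    def close_menu(self):
        self.menu_visible = False
        self.paused = False

    def update(self, dt):
        if not self.paused:           # the whole of "pause" lives here
            self.update_physics(dt)
            self.update_ai(dt)

    def render(self):
        self.draw_world()             # the frozen world still renders
        if self.menu_visible:
            self.draw_menu_overlay()  # GUI drawn on top

    # Stubs standing in for the real subsystems.
    def update_physics(self, dt): pass
    def update_ai(self, dt): pass
    def draw_world(self): print("world")
    def draw_menu_overlay(self): print("menu overlay")

game = Game()
game.open_menu()
game.update(1 / 60)  # no physics/AI while paused
game.render()        # world, then menu overlay
```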
Reducing coupling between game entities is pretty easy, again as long as you don't have an amorphous idea of what a game entity is that leads to everything potentially needing to talk to everything. Game characters typically live within a Map or a World, which is essentially a spatial database (among other things); a character can ask the World to tell it about nearby characters and objects, without ever needing to hold direct references to them.
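Here's a small sketch of that (hypothetical names): the World is the only thing holding the entity list, and characters query it for neighbours instead of referencing each other.

```python
import math

class World:
    """Acts as a simple spatial database for everything living in it."""
    def __init__(self):
        self.entities = []

    def spawn(self, entity):
        self.entities.append(entity)
        entity.world = self

    def nearby(self, entity, radius):
        # A real game would use a grid or quadtree; a linear scan shows the idea.
        return [e for e in self.entities
                if e is not entity
                and math.dist((e.x, e.y), (entity.x, entity.y)) <= radius]

class Character:
    def __init__(self, name, x, y):
        self.name, self.x, self.y, self.world = name, x, y, None

    def think(self):
        # No direct references to other characters -- just ask the World.
        for other in self.world.nearby(self, radius=5.0):
            print(f"{self.name} sees {other.name}")

world = World()
world.spawn(Character("orc", 0, 0))
hero = Character("hero", 3, 4)
world.spawn(hero)
hero.think()  # hero sees orc (distance exactly 5.0)
```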
Overall though you just have to use good software development rules for your code. The main thing is to keep interfaces small, simple, and focused on one and only one aspect. Loose coupling and the ability to focus on smaller areas of the code flows naturally from that.