Techniques to Reduce Complexity in Game Programming [closed]

Closed. This question needs to be more focused. It is not currently accepting answers. Closed 4 years ago.
In my spare time I program games as a hobby, different sorts of things and currently nothing very complex: 2D shooters, tile-based games, puzzle games, and so on.
However, as development of these games goes on, I find it hard to manage the complexity of the different subsystems within them: the interface, world view/model, event handling, states (menus, pause, etc.), special effects, and so on.
I attempt to keep the connections to a minimum and reduce coupling, but many of these systems need to talk to one another in some way, and I haven't found a way to manage that without holding my entire code base in my head at one time.
Currently I delegate different subsystems and subsystem functions to different objects that are aggregated together, but I haven't found a communication strategy that is decoupled enough.
What sort of techniques can I use to help me juggle all of these different subsystems and handle the complexity of an ever increasing system that needs to be modular enough to facilitate rapid requirements change?
I often find myself asking the same questions:
How do objects communicate with each other?
Where should the code that handles specific subsystems go?
How much of my code base should I have to think about at one time?
How can I reduce coupling between game entities?

Ah, if only there were a good answer to your question. Then game development wouldn't be nearly as difficult, risky, and time-consuming.
I attempt to keep the connections to a minimum and reduce coupling, however many of these systems need to talk in one way or another that doesn't require holding your entire code-base in your head at one time.
They do, but often they don't need to talk in quite as direct a way as people first believe. For example, it's common to have the game state push values into its GUI whenever something changes. If instead you can just store values and let the GUI query them (perhaps via an observer pattern), you have then removed all GUI references from the game state. Often it's enough to simply ask whether a subsystem can pull the information it needs from a simple interface instead of having to push the data in.
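To make the pull approach concrete, here is a minimal sketch (in Python, with invented names like GameState and ScoreWidget, not taken from the answer): the game state stores values and notifies observers only that "something changed", and the GUI pulls what it needs, so the game state never references any GUI types.

```python
class GameState:
    """Holds gameplay values; knows nothing about any GUI."""
    def __init__(self):
        self.score = 0
        self._observers = []

    def subscribe(self, callback):
        self._observers.append(callback)

    def add_score(self, points):
        self.score += points
        for notify in self._observers:
            notify()  # "something changed" -- no GUI types involved

class ScoreWidget:
    """GUI side: pulls the value it needs when told something changed."""
    def __init__(self, state):
        self._state = state
        self.text = ""
        state.subscribe(self.refresh)

    def refresh(self):
        self.text = f"Score: {self._state.score}"

state = GameState()
widget = ScoreWidget(state)
state.add_score(100)
print(widget.text)  # -> Score: 100
```

The dependency arrow now points only from GUI to game state, never the other way.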
How do objects communicate with each other?
Where should the code that handles specific subsystems go?
How much of my code base should I have to think about at one time?
How can I reduce coupling between game entities?
None of this is really specific to games, but it's a problem that arises often with games because there are so many disparate subsystems that we've not yet developed standard approaches to. If you take web development then there are really just a small number of established paradigms: the "one template/code file per URI" of something like PHP, or maybe the "model/view-template/controller" approach of RoR, Django, plus a couple of others. But for games, everybody is rolling their own.
But one thing is clear: you can't solve the problem by asking 'How do objects communicate'. There are many different types of object and they require different approaches. Don't try and find one global solution to fit every part of your game - input, networking, audio, physics, artificial intelligence, rendering, serialisation - it's not going to happen. If you try to write any application by trying to come up with a perfect IObject interface that will suit every purpose then you'll fail. Solve individual problems first and then look for the commonality, refactoring as you go. Your code must first be usable before it can be even considered to be reusable.
Game subsystems live at whatever level they need to, no higher. Typically I have a top level App, which owns the Graphics, Sound, Input, and Game objects (among others). The Game object owns the Map or World, the Players, the non-players, the things that define those objects, etc.
Distinct game states can be a bit tricky but they're actually not as important as people assume they are. Pause can be coded as a boolean which, when set, simply disables AI/physics updates. Menus can be coded as simple GUI overlays. So your 'menu state' merely becomes a case of pausing the game and showing the menu, and unpausing the game when the menu is closed - no explicit state management required.
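A sketch of that idea (the Game class and method names are my own, purely for illustration): pause is just a boolean that short-circuits updates, and the "menu state" is nothing more than pausing plus a visibility flag.

```python
class Game:
    def __init__(self):
        self.paused = False
        self.menu_visible = False
        self.ticks = 0

    def update(self, dt):
        if self.paused:
            return  # AI/physics updates simply disabled
        self.ticks += 1

    def open_menu(self):
        self.menu_visible = True
        self.paused = True

    def close_menu(self):
        self.menu_visible = False
        self.paused = False

game = Game()
game.update(0.016)
game.open_menu()
game.update(0.016)   # ignored while the menu is up
game.close_menu()
game.update(0.016)
print(game.ticks)  # -> 2
```

No state machine, no explicit state classes; the menu is an overlay and the pause flag does the rest.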
Reducing coupling between game entities is pretty easy, again as long as you don't have an amorphous idea of what a game entity is that leads to everything needing to potentially talk to everything. Game characters typically live within a Map or World, which is essentially a spatial database (among other things); a character can ask the World about nearby characters and objects without ever needing to hold direct references to them.
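Here's a toy illustration of the spatial-database idea, assuming a hypothetical World with a nearby() query; entities query the World instead of holding references to each other.

```python
import math

class World:
    """A (very) simple spatial database: entities ask the World about
    their surroundings instead of referencing each other directly."""
    def __init__(self):
        self._entities = {}  # name -> (x, y)

    def place(self, name, x, y):
        self._entities[name] = (x, y)

    def nearby(self, name, radius):
        x0, y0 = self._entities[name]
        return sorted(
            other for other, (x, y) in self._entities.items()
            if other != name and math.hypot(x - x0, y - y0) <= radius
        )

world = World()
world.place("player", 0, 0)
world.place("orc", 3, 4)       # distance 5 from the player
world.place("dragon", 30, 40)  # distance 50 from the player
print(world.nearby("player", 10))  # -> ['orc']
```

A real implementation would use a grid or quadtree rather than a linear scan, but the coupling win is the same.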
Overall though you just have to use good software development rules for your code. The main thing is to keep interfaces small, simple, and focused on one and only one aspect. Loose coupling and the ability to focus on smaller areas of the code flows naturally from that.

Related

How Many "Layer" Types Are There in Software Development? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers. Closed 8 years ago.
I have a conceptual question and believe Stack Overflow may be a great place to ask it. Often in my reading and learning I hear talk of "layers", and loosely defined layers at that, it seems. A lot of software jargon is used in describing these layers, so for an entry-level guy or gal (like me) each becomes difficult to get a grasp of.
So far, I have heard of three layers that interact with each other and make a program "run":
Business Logic Layer (BLL): Wikipedia has a great introductory article on this.
Application Logic Layer (ALL): the layer responsible for enabling the business logic layer to interact with boundary technologies.
Presentation Layer (PL): the layer I just learned of today.
Here's my question (and it may not be possible to answer): how many of these layer "types" exist and - briefly - what are they?
I also encourage posting resources via hyperlink that people can consult, particularly as these layers do seem to be so loosely defined.
In this particular case, the layers are essentially a generalization of the Model View Controller system architecture. Briefly, they are:
BLL: The 'guts' of the application, if you will. This is the logic that works on your own internal data types and does, as the name suggests, the 'business logic' that makes your application actually work. It should be as independent as possible from any particular implementation of how the user might interact with the business logic. For example, if you were building a Tic Tac Toe game, the BLL would be the rules engine that does the actual work of tracking people's moves, determining who wins, etc.
ALL: The 'interface' between the BLL and all of the stuff that feeds data into the BLL. This is going to be much more specific to a particular application of the BLL. For example, if your Tic Tac Toe engine had the ability to save out the results of games, the ALL might provide an interface to a database engine that stores the results. It would also be responsible for interfacing the BLL with whatever PL you chose to use (text based, a GUI, etc).
PL: This is the 'front end' of your application that the user interacts with. This could be built from a complicated framework like Qt, or something simple like a text interface. Either way, it's generally completely independent in terms of implementation from the application it's trying to expose. To continue the Tic Tac Toe analogy, you could build a relatively generic 3x3 grid with shapes and a message output that just happens, via the ALL, to wind up displaying a Tic Tac Toe game. Now in practice they're rarely that decoupled, but the point is that you should try and keep any actual logic out of the PL, such that you can change the implementation of the BLL or ALL without having to touch your PL code as long as the interfaces stay the same.
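To tie the three layers together, here is a rough Tic Tac Toe sketch along the lines of the analogy above (the class names are my own invention): the BLL engine knows nothing about storage or display, the ALL wires it to a dict standing in for a database, and the PL is a throwaway text renderer.

```python
# BLL: pure rules engine -- no I/O, no storage.
class TicTacToe:
    def __init__(self):
        self.board = [" "] * 9
        self.turn = "X"

    def move(self, cell):
        if self.board[cell] != " ":
            raise ValueError("cell taken")
        self.board[cell] = self.turn
        self.turn = "O" if self.turn == "X" else "X"

    def winner(self):
        lines = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
        for a, b, c in lines:
            if self.board[a] != " " and self.board[a] == self.board[b] == self.board[c]:
                return self.board[a]
        return None

# ALL: mediates between the engine and storage / front ends.
class GameService:
    def __init__(self, store):
        self.game = TicTacToe()
        self.store = store  # any dict-like "database"

    def play(self, cell):
        self.game.move(cell)
        w = self.game.winner()
        if w:
            self.store["last_winner"] = w
        return w

# PL: a trivial text front end; could be swapped for a GUI untouched.
def render(game):
    b = game.board
    return "\n".join("|".join(b[i:i+3]) for i in (0, 3, 6))

store = {}
svc = GameService(store)
for cell in (0, 3, 1, 4, 2):  # X takes the top row and wins
    result = svc.play(cell)
print(result, store["last_winner"])  # -> X X
```

Replacing the dict with a real database, or render() with a GUI, touches exactly one layer each.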
These layers are loosely defined because they're just generalizations: they make the design of complex systems easier to reason about, naturally guide development towards proper compartmentalization of functionality (for maximum code reuse and component swappability), and make formal Verification and Validation and other QA tasks easier to perform. There are many, many ways you can split up a software design into 'layers', but these three are the ones that are typically formally described in software engineering course material. So there isn't really any specific 'number of layers' that you can talk about; how segmented a particular design is really depends on the specific domain and the 'buzzwords' in the industry at the time. You will, however, almost always find something akin to these three layers, sometimes broken into a few smaller chunks.
It is strange to me that the three concepts you list should be called "layers". They may be three components of a software system but they don't appear to lie on top of each other except in a crude PowerPoint block diagram.
The idea of layering relates to levels of abstraction in code development where lower layers provide the components used to build higher layers. The OSI model is a paradigm example.
There can be any number of layers in a particular software system depending on how many levels of dependency exist. Perhaps there are only one or two layers within the immediate application being developed, though that application is likely to be dependent on a larger stack of assumed sub-structure.

Organizing code, logical layout of segmented files [closed]

Closed. This question needs to be more focused. It is not currently accepting answers. Closed 5 years ago.
I have known enough about programming to get me in trouble for about 10 years now. I have no formal education, though I've read many books on the subject for various languages. The language I am primarily focused on now would be PHP, at least for the scale of things I am doing now.
I have used some OOP classes for a while, but never took the dive into understanding the principles behind the scenes. I am still not at the level I would like to be expression-wise, but my recent reading of the book The Object-Oriented Thought Process has left me wanting to advance my programming skills.
With motivation from the new concepts, I've started a new project. I've coded some re-usable classes that deal with user auth, user profiles, database interfacing, and some other stuff I use regularly on most projects.
Now having split my typical garbled spaghetti-bowl mess of code into somewhat organized files, I've been having some problems when it comes to making sure files are all included when they need to be, how to logically divide the scripts up into classes, and how segmented I should be making each class.
What I'm really asking for is advice or suggested reading that focuses not on specific functions and formats of code, but on the logical layout of projects that are larger than just a hobby project.
I want to learn how to do things properly, and while I am still learning in some areas, this is something that I have no clue about other than just being creative, and trial/error. Mostly error.
Thanks for any replies. This place is great.
I will try to share my experience from previous projects here; maybe it helps you.
If you try to segment your project, try to find components that might be useful by themselves. If you write, for example, a database layer, think about what it would take to make that layer independent of the rest of your application (except utility classes and configuration). If you write a rich client that accesses the database layer, try to put into it exactly the stuff you would also need when accessing it from a web layer. Now you have a component that might even be useful in a command-line client.
Or, if you want to see this from a lower level: split your application into small units, and don't let these units have circular dependencies! If two components have circular dependencies, they cannot be split apart and should really be a single component. I know I have broken this rule sometimes, because you cannot always uphold it, but it is a good rule for understanding the building blocks of your app.
Another popular rule for dividing an application is the Model-View-Controller pattern (MVC), which states that the Model (the data classes), the Controller (the logic of your program) and the View (the graphical user interface) should be split into distinct packages. I adhere to this, and divide my code like so: within each package I have distinct model, view and controller classes, but the model classes don't know anything about the controller layer, and the controller layer does not know anything about the GUI.
Since GUI development is always tedious, and therefore often the least tested part (at least in unit tests) of an application, splitting it from the controller makes writing the business logic a lot easier. In fact, it lets you concentrate on what you have to do, which is getting work done, which you code in the business logic. If this part works, you can spend time writing a nice GUI for it. Of course, the GUI and ease-of-use concerns will often bring their own requirements to the controller, but at least it's loosely coupled.
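A minimal illustration of that split (the Account example and all names are invented here, not from the post): the model knows nothing about the controller, the controller knows nothing about the view, and the business logic can be unit-tested without any GUI at all.

```python
# Model: plain data, knows nothing about controller or view.
class Account:
    def __init__(self, balance=0):
        self.balance = balance

# Controller: business logic working on models; no GUI imports here,
# which is exactly what makes it easy to unit-test.
class AccountController:
    def __init__(self, account):
        self.account = account

    def deposit(self, amount):
        if amount <= 0:
            raise ValueError("amount must be positive")
        self.account.balance += amount

# View: only formats what the lower layers expose; any GUI (or a
# plain test) can sit here without the model or controller noticing.
def render_balance(account):
    return f"Balance: {account.balance}"

acct = Account()
AccountController(acct).deposit(50)
print(render_balance(acct))  # -> Balance: 50
```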
In my current large project we have some big components which we now see as independent products that get used by the real product. This makes testing and writing the independent component easier for the person in charge, and gives everyone a more stable component.
Just my 2¢.

How would you use AGILE here? [closed]

Closed. This question is opinion-based. It is not currently accepting answers. Closed 5 years ago.
I am a big proponent of agile, but a friend of mine (who doesn't know agile yet; he's a managerial type) asked me how I would go about planning and developing a complex distributed project, with a database layer, comms layer, interface, and integration with embedded devices.
The agile method emphasises releasing early and iterating, but in the scenario of a project with many interconnected components that all need to be functional for the whole thing to work, it would be difficult to release an early version without working on all the components. How would agile help my friend here? How best would he utilize it?
Teams in my company face the same types of problems. We are building projects with a large number of moving parts and architectural layers, which makes it difficult to create a working product early on. Additionally, there are often specialty resources that need to be scheduled slightly out of sync with the team. Some approaches we've taken are below. It has been challenging, but these approaches seem to be helping.
Build as vertically as possible
In other words, strive to have something working, end to end as quickly as possible. We typically get there a few sprints in on a 9-16 month project.
You'll often find a significant number of layers can be mocked or held back.
Often, the initial customer-facing components are placeholders. We create a limited bit of functionality that is something like what the customer wants but is likely to be very different in the final product. This allows us to prove the rest of the product at a system level and provide visibility from a system perspective.
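A sketch of how a held-back layer can be mocked so the vertical slice still runs end to end (PricingService and Checkout are invented names for illustration): the downstream component is fully real, and the placeholder is injected behind the same interface the eventual implementation will use.

```python
class MockPricingService:
    """Placeholder for a subsystem that is still sprints away:
    wired in behind the real interface, returns canned data."""
    def price(self, sku):
        return 999  # fixed price in cents, clearly not final behaviour

class Checkout:
    """Downstream component, fully real; the pricing layer is injected,
    so the mock swaps out for the real service without code changes."""
    def __init__(self, pricing):
        self.pricing = pricing

    def total(self, skus):
        return sum(self.pricing.price(s) for s in skus)

# The vertical slice works today, with the real service held back.
checkout = Checkout(MockPricingService())
print(checkout.total(["sku-a", "sku-b"]))  # -> 1998
```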
Separate base architecture from the product
Our early sprints are often centered around infrastructure/architecture. For example, threading subsystems, performance monitoring, communications and test frameworks.
Treat the subsystems as separate deliverables
Fully define each subsystem
Complete (truly complete, not just a partial implementation) each subsystem
Load test each subsystem within the context of how it will be used in the final product
Make your first iteration dedicated to architectural design, including the identification of the necessary components and the definition of the relationships and communications between them.
Once you have a clear picture of how the components interact, build the skeleton of each one. That is, implement "stub" components that just have the communication part in place, while the rest of the functionality does nothing or returns test data. Dedicate an iteration to this task (including testing the component communication mechanisms) as well.
Then you can plan iterations to fully develop each component in the appropriate order, so that the system can grow in an ordered way.
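The stub-skeleton idea might look something like this (CommsBus and StubDeviceReader are made-up names): the communication mechanism is real from iteration one, while the component behind it only returns test data until its own iteration comes up.

```python
class CommsBus:
    """The communication part, real from the first iteration."""
    def __init__(self):
        self.handlers = {}

    def register(self, topic, handler):
        self.handlers[topic] = handler

    def send(self, topic, payload):
        return self.handlers[topic](payload)

class StubDeviceReader:
    """Skeleton component: wired to the bus, returns test data only.
    Its real implementation replaces handle() in a later iteration."""
    def __init__(self, bus):
        bus.register("read-sensor", self.handle)

    def handle(self, payload):
        return {"sensor": payload["sensor"], "value": 42}  # canned

bus = CommsBus()
StubDeviceReader(bus)
reply = bus.send("read-sensor", {"sensor": "temp"})
print(reply)  # -> {'sensor': 'temp', 'value': 42}
```

The inter-component plumbing gets exercised and tested early, which is usually where the integration surprises hide.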
TDD - iterate with incomplete parts after writing tests. Mock the bits that aren't ready. Sounds exciting.
It is unlikely that each layer needs to be complete for it to be usable by the other layers. For example, the persistence layer could just serialize objects to a file initially and be converted to use a database when the need arises. I would look at implementing the minimum of each layer needed by the initial stories, then fleshing it out to add functionality as the system grows.
Growing a system this way means you only implement the functionality you need, rather than all the functionality you think you may need at some indeterminate time in the future.
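For example, the file-first persistence layer described above could be sketched like this (FilePersistence is a hypothetical name); a database-backed class exposing the same save/load methods could replace it later without touching any callers.

```python
import json
import os
import tempfile

class FilePersistence:
    """First iteration: serialize objects to a JSON file. A DB-backed
    class with the same save()/load() interface can replace this when
    the need arises, leaving the rest of the system untouched."""
    def __init__(self, path):
        self.path = path

    def save(self, obj):
        with open(self.path, "w") as f:
            json.dump(obj, f)

    def load(self):
        with open(self.path) as f:
            return json.load(f)

path = os.path.join(tempfile.gettempdir(), "state.json")
store = FilePersistence(path)
store.save({"level": 3})
print(store.load())  # -> {'level': 3}
```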
If you cannot break the large project into smaller parts that are useful (i.e. enable some use cases) on their own, agile probably won't help you that much in this project. You can pick some techniques like pair programming, refactoring, etc., but the overall planning has to be done in a conventional way.

Why is UI programming so time consuming, and what can you do to mitigate this? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers. Closed 4 years ago.
In my experience, UI programming is very time consuming, expensive (designers, graphics, etc.) and error prone; and by definition, UI bugs or glitches are very visible and embarrassing.
What do you do to mitigate this problem?
Do you know of a solution that can automatically convert an API into a user interface (preferably a web user interface)?
Probably something like a JMX console
with good defaults
can be tweaked with css
where fields can be configured to be radio button or drop down list, text field or text area, etc
localizable
etc
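As a rough sketch of what such API-to-UI generation amounts to (using Python dataclass introspection purely for illustration; UserForm and generate_form are invented names), a generator can map field types to default widgets, which is about as far as automation gets without extra configuration:

```python
import dataclasses

@dataclasses.dataclass
class UserForm:
    name: str
    age: int
    newsletter: bool

def generate_form(cls):
    """Map each field's type to a default widget. Real generators
    (JMX consoles, Naked Objects, Rails scaffolding) work along
    similar lines, with far more configurability."""
    widget_for = {str: "text field", int: "number field", bool: "checkbox"}
    return {f.name: widget_for.get(f.type, "text area")
            for f in dataclasses.fields(cls)}

print(generate_form(UserForm))
# -> {'name': 'text field', 'age': 'number field', 'newsletter': 'checkbox'}
```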
Developing UI is time consuming and error-prone because it involves design. Not just visual or sound design, but more importantly interaction design. A good API is always interaction model neutral, meaning it puts minimal constraints on actual workflow, localisation and info representation. The main driver behind this is encapsulation and code re-use.
As a result it is impossible to extract enough information from API alone to construct a good user interface tailored to a specific case of API use.
However, there are UI generators that typically produce CRUD screens based on a given API. Needless to say, such generated UIs are not well suited for frequent users who demand higher UI efficiency, nor are they particularly easy to learn in a larger system, since they don't really communicate the system image or interaction sequence well.
It takes a lot of effort to create a good UI because it needs to be designed according to specific user needs and not because of some mundane API-UI conversion task that can be fully automated.
To speed up the process of building a UI and mitigate risks, you could either involve UI professionals or learn more about the job yourself. Unfortunately, there is no shortcut or magic wand, so to speak, that will produce a quality UI based entirely and only on an API without lots of additional information and analysis.
Please also see an excellent question: "Why is good UI design so hard for some developers?" that has some very insightful and valuable answers, specifically:
Shameless plug for my own answer.
Great answer by Karl Fast.
I don't believe UI programming is more time consuming than any other sort of programming, nor is it more error prone. However, bugs in the UI are often more obvious. Spotting an error in a compiler is often much more tricky.
One clear difference between UI programming is that you have a person at the other end, instead of another program, which is very often the case when you're writing compilers, protocol parsers, debuggers, and other code which talks to other programs and computers. This means that the entity you're communicating with is not well-specified and may behave very erratically.
EDIT: "unpredictable" is probably a more appropriate term. /Jesper
Your question of converting an API to a user interface just doesn't make sense to me. What are you talking about?
Looks like you are looking for the 'Naked Objects' architectural pattern. There are various implementations available.
http://en.wikipedia.org/wiki/Naked_objects
I'm not providing a solution, but I'll attempt to answer the why.
I don't speak for everyone, but for me at least, I believe one reason is that programmers tend to concentrate on functionality more than usability, and they tend not to be too artistic. I think they just have a different type of creativity. I find that it takes me a long time to create the right graphics compared to how long it takes me to write the code (though, for the most part, I haven't done any projects with too many graphical requirements).
Automatically generating user interfaces may be possible to some extent, in that a generator can create controls for the required input and output of data. But UI design is much more involved than simply putting the required controls onto a screen. In order to create a usable, user-friendly UI, knowledge from disciplines such as graphic design, ergonomics, psychology, etc. has to be combined. There is a reason that human-computer interaction is becoming a discipline of its own: it's not trivial to create a decent UI.
So I don't think there's a real solution to your problem. UI design is a complex task that simply takes time to do properly. The only area where it is relatively easy to win some time is with the tooling: if you have powerful tools to implement the design of the user interface, you don't have to hand-code every pixel of the UI yourself.
You are absolutely correct when you say that UI is time consuming, costly and error prone!
A great compromise I have found is as follows...
I realized that a lot of data (if not most) can be presented using a simple table (such as a JTable) rather than continuously trying to create custom panels and fancy GUIs. It doesn't seem obvious at first, but it's quite decent, usable and visually appealing.
Why is it so fast? Because I was able to create a reusable framework which can accept a collection of concrete models and, with little to no effort, render all these models within the table. So much code reuse, it's unbelievable.
By adding a toolbar above the window, my framework can add to, remove from or edit entries in the table. Using the full power of JTables, I can hide (by filtering) and sort as needed by extending various classes (but only if/when this is required).
I find myself reusing a heck of a lot of code every time I want to display and manage new models. I make extensive use of icons (per column, row or cell, etc.) to beautify the screens. I use large icons as a window header to make each screen 'appear' different and appealing; it always looks like a new and different screen, but it's always the same code behind it.
A lot of work and effort was required at first to build the framework, but now it's paying off big time.
I can write the GUI for an entirely new application with as many as 30 to 50 different models, and as many screens, in a fraction of the time it would take me using the 'custom UI method'.
I would recommend you evaluate and explore this approach!
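A language-agnostic sketch of the approach (the answer above uses Java's JTable; this Python version with invented names just shows the reuse pattern): one generic table "screen" can render any model whose attributes match the configured columns, so each new model costs almost no UI code.

```python
class TableScreen:
    """Reusable table 'screen': any model with the named attributes
    can be listed, added to, and removed from with the same code."""
    def __init__(self, columns):
        self.columns = columns
        self.rows = []

    def add(self, model):
        self.rows.append(model)

    def remove(self, index):
        del self.rows[index]

    def render(self):
        lines = [" | ".join(self.columns)]
        for m in self.rows:
            lines.append(" | ".join(str(getattr(m, c)) for c in self.columns))
        return "\n".join(lines)

class Enemy:  # one of many models the same screen can display
    def __init__(self, name, hp):
        self.name, self.hp = name, hp

screen = TableScreen(["name", "hp"])
screen.add(Enemy("orc", 30))
screen.add(Enemy("troll", 80))
print(screen.render())
```

Swap Enemy for any other model with matching attribute names and the same screen code displays it.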
If you already know, or could learn, Ruby on Rails, ActiveScaffold is excellent for this.
One reason is that we don't have a well-developed pattern for UTDD - User Test Driven Development. Nor have I seen many good examples of mapping User Stories to Unit Tests. Why, for example, do so few tutorials discuss User Stories?
ASP.NET Dynamic Data is something you should investigate. It meets most, if not all, of your requirements.
It's hard because most users/customers are dumb and can't think straight! :)
It's time consuming because UI devs/designers are so obsessive-compulsive! :)

What are the tipping points for team size vs process overhead? [closed]

Closed. This question is opinion-based. It is not currently accepting answers. Closed 3 years ago.
At what point in a team's growth must process change drastically? A lone coder can get away with source control and a brain. A team trying to ship large prepackaged software to local and international markets must have a bit more in place.
If you've experienced a large transition in 'process': Was the team's process successfully changed with the current members or was the team itself mostly replaced by the time the process change came? What were the important aspects that changed, were some unnecessary?
You are going to find it hard to get a quantitative answer to this. It really depends on how "wide" the project is. If there is one simple subsystem to work on, any more than 1 person will result in people stepping on other people's toes. If you somehow have a project with 8 segregated subsystems, then you can have 8 developers no problem. More likely, you will have several subsystems that depend on each other to varying degrees. Oh, and all of this is true regardless of the process you are using.
Choosing what type of process you want to use (spiral, scrum, waterfall, CMM, etc.), and how heavyweight a version of that process you want to implement is, is another problem, and it's difficult. This is mainly because people try to take processes that work in building construction, factory work, or some other industry that is nothing like software and apply it to software development.
In my opinion, McConnell writes the best books on software process, even though none of his books are process books, per se.
If memory serves me correctly, anything above five people is where things get dicey; the number of paths of communication within the team gets really large after that (2 people = 1 path, 3 = 3 paths, 4 = 6 paths, 5 = 10 paths, and so on).
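The path count is just the pairwise (handshake) formula, n(n-1)/2:

```python
def communication_paths(n):
    """Every pair of team members is a potential communication path."""
    return n * (n - 1) // 2

for n in range(2, 7):
    print(n, communication_paths(n))
# 2 -> 1, 3 -> 3, 4 -> 6, 5 -> 10, 6 -> 15
```

The growth is quadratic, which is why adding "just one more person" to a six-person team costs more coordination than it did at three.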
The last three places I've worked, the IT team went through a massive process change. Yes, you will lose people, probably some of the better ones too. It's not that they are stubborn and trying to stick to the old ways; it's just that a change like this causes a massive amount of stress. There are deadlines to hit and quality targets to meet. People get confused about which process they are supposed to follow, and many fall back to the "old ways" (I've been guilty of this too, I admit).
The key to succeeding is to take it slow, in small steps. People need time to understand why the process is changing and how it benefits them. That is huge; if you don't take the time to do this, it won't succeed, and key people will end up quitting, causing turmoil.
One of the things to absolutely remember is that ultimately some turnover is good. It brings new ideas and people with different (and sometimes better) skill sets. You shouldn't try to force change onto people rapidly, but they shouldn't be a barrier either. If they don't agree with what is going on, they should either try to come to a middle ground with the people driving the process, or leave. One of the real eye-openers I learned at my first job is that in reality everyone is replaceable. Someone will eventually be able to step in and take the reins.
In my experience this transition occurs at exactly the moment at which you also need management. It is hard to get above 8 developers without some over-arching coordinating function being in place, whether it is a team lead, segregation of tasks or old fashioned management. The reality I have witnessed is that even with the best, most talented, most bought-in developers you still need coordination when you get above 8 working concurrently.
And there is a discontinuous step in process as you cross that boundary. However it needn't be catastrophic. The best approach is for the team to adopt as much formal process as it can when still small so that all the necessary behaviour and infrastructure is in place. I would argue that this is synonymous with good development in any case, so even the lone developer ought to have it (source code control, unit tests and coding standards are all examples of what I am talking about). If you take this approach then the step up in process when it occurs is not so much a jolt as a rigorous coordination.
Every developer you add needs to be brought in with the process already in place. When you get to 8 (or whatever the number turns out to be for you), you'll find that your team meetings get a little too loose and wordy, personalities start playing a part, and it is more effective to divide activity up. At that moment your boss (or you, if you are the boss) will need to appoint one or more coordinators to distribute and control work.
This process should scale up as well if you stick to it. Teams can subdivide, or you can bud teams out to perform functional tasks if necessary. This approach also works regardless of the methodology you have chosen for your project management, be it Agile or not.
Once you get up to 4 or 5 teams, i.e. 30-50 people then you will almost certainly need people who are happy that their sole task is coordination.
If you are small now and contemplating or expecting the complexity shift, then make sure you get your fundamental processes nailed down immediately and certainly before you start adding more staff.
HTH and good luck
A lot depends on the people working on the project, the integration points, etc. Even a single coder should have a code management tool and needs to interact with clients or the 'boss'. From experience, I've seen a change once there are more than 2 people, and then probably at any increase of 50%. If teams are organized as project teams focused on decoupled parts of the product, the overhead will not increase exponentially as the team size grows (project vs. matrix organizations).
