I recently (5 weeks ago) started my first school year in a school-based apprenticeship to become an IT assistant.
We're learning programming and are starting with very basic Processing exercises, while the ultimate plan is to get into C#.
Now I understand that Processing might not be the best language for my little project, but I still would like to work this out somehow.
What I want to build is a "Stargate Dial Computer". If you know the TV show, you'll know what I'm talking about.
I wanted to make it visually appealing, so I decided to use one of the available tools to create my shapes, as I am using a DHD (a term from the show) for the dialing process - see picture: https://i.imgur.com/r7jBjRG.png
This small shape setup is already over 500 lines of code, and that seems unwise in itself. Besides that, the plan is to have every single one of these trapezoids be a pushable button - but to achieve that manually, I'd have to check the mouse position against each shape's coordinates to use them as buttons.
What I'm asking for now is any input on how to work with these shapes in a logical way to make my idea possible at all.
Something like checking the shape's color instead of checking its coordinates some 40 times, and getting the "active" shape's size in some kind of function. Or a way to just get every shape one by one in a loop, checking every beginShape and endShape instance - if that wouldn't be a performance nightmare.
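To show what I mean, here's the kind of thing I'm imagining (just an untested sketch; the class name and the idea of storing each trapezoid's corner points are mine):

```java
// Processing (Java mode) sketch: each DHD key stores its corner
// points and can test whether the mouse is inside it.
class DhdKey {
  float[] xs, ys; // trapezoid corner coordinates

  DhdKey(float[] xs, float[] ys) {
    this.xs = xs;
    this.ys = ys;
  }

  void display() {
    beginShape();
    for (int i = 0; i < xs.length; i++) {
      vertex(xs[i], ys[i]);
    }
    endShape(CLOSE);
  }

  // standard ray-casting point-in-polygon test
  boolean contains(float px, float py) {
    boolean inside = false;
    for (int i = 0, j = xs.length - 1; i < xs.length; j = i++) {
      if ((ys[i] > py) != (ys[j] > py) &&
          px < (xs[j] - xs[i]) * (py - ys[i]) / (ys[j] - ys[i]) + xs[i]) {
        inside = !inside;
      }
    }
    return inside;
  }
}

DhdKey[] keys; // initialize in setup() with the ~40 trapezoids

void mousePressed() {
  for (DhdKey k : keys) {
    if (k.contains(mouseX, mouseY)) {
      // this key was clicked - the dial logic would go here
    }
  }
}
```

Would something like that be the right direction?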
Keep in mind that I am a beginner. I do know the basics, also of other languages, and I can apply some programming logic here and there - but since I'm not sure what Processing can and can't do (yet), I'm looking for an answer to whether this is even reasonable or possible.
Any help and ideas would be much appreciated!
Thanks!
I'm reading Randall Hyde's Write Great Code (Volume 2), and I found this:
[...] it’s not good programming practice to create monolithic applications, where all the source code appears in one source file (or is processed by a single compilation) [...]
I was wondering, why is this so bad?
Thanks everyone for your answers. I really wish I could accept more of them, but I've chosen the most concise, so that whoever reads this question finds the essentials immediately.
Thanks guys ;)
The main reason, more important in my opinion than the sheer size of the file in question, is the lack of modularity. Software becomes easy to maintain if you have it broken down into small pieces whose interactions with each other are through a few well defined places, such as a few public methods or a published API. If you put everything in one big file, the tendency is to have everything dependent on the internals of everything else, which leads to maintenance nightmares.
Early in my career, I worked on a Geographic Information System that consisted of over a million lines of C. The only thing that made it maintainable, the only thing that made it work in the first place, is that we had a sharp dividing line between everything "above" and everything "below". The "above" code implemented user interfaces, application-specific processing, etc., and everything "below" implemented the spatial database. And the dividing line was a published API. If you were working "above", you didn't need to know how the "below" code worked, as long as it followed the published API. If you were working "below", you didn't care how your code was being used, as long as you implemented the published API. At one point, we even replaced a huge chunk of the "below" side that had stored stuff in proprietary files with tables in SQL databases, and the "above" code didn't have to know or care.
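To make that concrete, here's a toy sketch in Java of what such a dividing line can look like (all names invented for illustration; the real system was C and far bigger):

```java
// The published API: everything "above" talks only to this interface.
public interface SpatialDatabase {
    void store(String id, double lat, double lon);
    String findNearestId(double lat, double lon);
}

// One possible "below" implementation. It could store features in
// proprietary files or in SQL tables; the "above" code never knows.
class SqlSpatialDatabase implements SpatialDatabase {
    @Override
    public void store(String id, double lat, double lon) {
        // write a row to a spatial table
    }

    @Override
    public String findNearestId(double lat, double lon) {
        // run a nearest-neighbour query and return the feature id
        return null;
    }
}
```

Swapping the file-based storage for SQL then means writing one new class, not touching a million lines of "above" code.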
Because everything gets crammed together.
If you have separate files for separate things, then you'll be able to find and edit things much faster.
Also, if there's an error, you can locate it more easily.
Because you quickly get (tens of) thousands of lines of unmaintainable code.
Impossible to read, impossible to maintain, impossible to extend.
Also, it's good for team work, since there are a lot of small files which can be edited in parallel by different developers.
Some reasons:
It's difficult to understand something that's all jammed together like that. Breaking the logic into functions makes it easier to follow.
Factoring the logic into discrete components allows you to re-use those components elsewhere. Sometimes "elsewhere" means "elsewhere in the same program."
You cannot possibly unit-test something that's built in a monolithic fashion. Breaking a huge function (or program) into pieces allows you to test those pieces individually - see the sketch below.
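A tiny Java illustration of that last point (an invented example): logic buried in a monolithic main() can't be tested, but pulled out as a small pure function it can be.

```java
// Extracted from a (hypothetical) monolithic program so it can be
// tested in isolation.
class PriceRules {
    static double discounted(double price, int quantity) {
        // made-up business rule: 10% off for orders of 10 or more
        return quantity >= 10 ? price * 0.9 : price;
    }
}

// A JUnit 5 test for just that piece:
class PriceRulesTest {
    @org.junit.jupiter.api.Test
    void bulkOrdersGetTenPercentOff() {
        org.junit.jupiter.api.Assertions.assertEquals(
            90.0, PriceRules.discounted(100.0, 10), 1e-9);
    }
}
```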
While I understand that HTML and CSS aren't 'programming languages,' I suppose you could look at it in terms of having all of your .css, .js and .html on one page: it makes it difficult to recycle or debug code.
Including all the source code in a single file makes it hard to manage. A better approach is to divide your program into separate modules. Each module has its own source file, and all the modules are finally linked together to create the executable. This helps a lot in maintaining the code and makes it easy to find module-specific bugs.
Moreover, if you have two or more people working simultaneously on the same project, there is a high probability that you will waste a lot of your time resolving code conflicts.
Also if you have to test individual components (or modules), you can do it easily without bothering about the other independent modules (if any).
Because decomposition is a computer science fundamental: Solve large problems by decomposing them into smaller pieces. Not only are the smaller problems more manageable, but they make it easier to keep a picture of the entire problem in your head.
When you talk about a monolithic program in a single file, all I can think of is the huge spaghetti code that found its way into COBOL and FORTRAN back in the day. It encourages a cut-and-paste mentality that only gets worse with time.
Object-oriented languages are an attempt to help with this. OO languages decompose problems into software components, with data and functions encapsulated together, that map to our mental models of how the world works.
Think about it in terms of the Single Responsibility Principle. If that one large file, or one monolithic application, or one "big something" is responsible for everything concerning that system, it's a catch-all bucket of functionality. That functionality should be separated into its discrete components for maintainability, re-usability, etc.
Programmers write thousands of lines of code; if you keep separate things in separate files, you can manage and locate them much more easily.
Apart from all the reasons mentioned above, typically all the code from a compilation unit is included in the final program. If you split things up, however, and it turns out that the code in a particular file isn't used, it won't be linked in. Even if it is linked, it's easy to turn that file into a DLL should you decide to use the functionality at run time. Splitting up also facilitates dependency management and reduces build times, since only the modified source files are recompiled, leading to greater maintainability and productivity.
From some really bad experience I can tell you that it's unreadable. Function after function of code, you can never understand what the programmer meant or how to find your way out of it.
You cannot even scroll in the file because a little movement of the scroll bar scrolls two pages of code.
Right now I'm rewriting an entire application because the original programmers thought it a great idea to create 3000-line code files. It just cannot be maintained. And of course it's totally untestable.
I think you have never worked in a big company that has 6,000 lines of code in one file... where every couple of thousand lines you see comments like /** Do not modify this block, it's critical */, and you think the bug assigned to you is coming from that block. And your manager says, "Yeah, look into that file, it's somewhere in it."
And the code breaks all those beautiful OOP concepts, is full of dirty switches, and has something like fifty contributors.
Lucky you.
There has been one more question on what data-oriented design is, and there's one article which is often referred to (and I've read it like 5 or 6 times already). I understand the general concept of this, especially when dealing with, for example, 3D models, where you'd like to keep all vertices together, and not pollute your faces with normals, etc.
However, I have a hard time visualizing how data-oriented design might work for anything but the most trivial cases (3D models, particles, BSP trees, and so on). Are there any good examples out there that really embrace data-oriented design and show how this might work in practice? I can plow through large code-bases if needed.
What I'm especially interested in is the mantra "where there's one there are many", which I can't really seem to connect with the rest. Yes, there is always more than one enemy, yet you still need to update each enemy individually, because they're not all moving the same way, now are they? Same goes for the 'balls' example in the accepted answer to the question above (I actually asked this in a comment on that answer, but haven't gotten a reply yet). Is it simply that the rendering will only need the positions, and not the velocities, whereas the game simulation will need both, but not the materials? Or am I missing something? Perhaps I've already understood it and it's a far more straightforward concept than I thought.
Any pointers would be much appreciated!
So, what is DOD all about? Obviously, it's about performance, but it's not just that. It's also about well-designed code that is readable, easy to understand and even reusable.
Now, object-oriented design is all about designing code and data to fit into encapsulated virtual "objects". Each object is a separate entity with variables for the properties that object might have, and methods to take action on itself or other objects in the world. The advantage of OO design is that it's easy to mentally model your code as objects, because the whole (real) world around us seems to work the same way: objects with properties that can interact with each other.
Now the problem is that the CPU in your computer works in a completely different way. It works best when you let it do the same things again and again. Why is that? Because of a little thing called cache. Accessing RAM on a modern computer can take 100 or 200 CPU cycles (and the CPU has to wait all that time!), which is way too long. So there's a small portion of memory on the CPU that can be accessed really quickly: cache memory. The problem is that it's only a few MB, tops. So every time you need data that isn't in cache, you still need to go the long way to RAM. And it's not just that way for data; the same goes for code. Trying to execute a function that's not in the instruction cache will cause a stall while the code is loaded from RAM.
Back to OO programming. Objects are big, but most functions need only a small portion of that data, so we're wasting cache by loading unnecessary data. Methods call other methods which call other methods, thrashing your instruction cache. Still, we often do a lot of the same stuff over and over again.

Let's take a bullet from a game as an example. In a naive implementation, each bullet could be a separate object. There might be a bullet manager class. It calls the first bullet's update function. That updates the 3D position using the direction/velocity, which causes a lot of other data from the object to be loaded into the cache. Next, we call the world manager class to check for a collision with other objects. This loads lots of other stuff into the cache; maybe it even causes code from the original bullet manager class to get dropped from the instruction cache. Now we return to the bullet update; there was no collision, so we return to the bullet manager, which might need to load some code again. Next up, bullet #2's update. This loads lots of data into the cache, calls the world manager... etc. So in this hypothetical situation we've got 2 stalls for loading code and, let's say, 2 stalls for loading data. That's at least 400 cycles wasted for 1 bullet, and we haven't even taken bullets that hit something into account. Now, a CPU runs at 3+ GHz, so we're not going to notice one bullet, but what if we've got 100 bullets? Or even more?
So this is the "where there's one there are many" story. Yes, there are some cases where you've only got one object: your manager classes, file access, etc. But more often, there are lots of similar cases. Naive, or even not-so-naive, object-oriented design will lead to lots of problems. So enter data-oriented design. The key of DOD is to model your code around your data, not the other way around as with OO design. This starts at the first stages of design. You do not first design your OO code and then optimize it. You start by listing and examining your data and thinking out how you want to modify it (I'll get to a practical example in a moment). Once you know how your code is going to modify the data, you can lay it out in a way that makes it as efficient as possible to process. Now you may think this can only lead to a horrible soup of code and data everywhere, but that is only the case if you design it badly (bad design is just as easy with OO programming). If you design it well, code and data can be neatly arranged around specific functionality, leading to very readable and even very reusable code.
So back to our bullets. Instead of creating a class for each bullet, we only keep the bullet manager. Each bullet has a position and a velocity. Each bullet's position needs to be updated. Each bullet has to have a collision check, and all bullets that have hit something need to take some action accordingly. Just by taking a look at this description, I can design this whole system in a much better way. Let's put the positions of all bullets in an array/vector, and the velocities of all bullets in another. Now let's start by iterating along those two arrays and updating each position value with its corresponding velocity. Now all data loaded into the data cache is data we're going to use. We can even issue a smart pre-loading command to pre-load some array data in advance, so the data is in cache when we get to it. Next, collision checking. I'm not going into detail here, but you can imagine how updating all bullets one after another can help. Also note that if there's a collision, we're not going to call a new function or do anything right away. We just keep a vector with all bullets that had a collision, and when collision checking is done, we can update all of those one after another. See how we just went from lots of memory access to almost none by laying our data out differently? Did you also notice how our code and data, even though not designed in an OO way any more, are still easy to understand and easy to reuse?
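A minimal Java sketch of that layout (names and details are mine; a real engine would likely do this in C++ with tighter control over memory):

```java
// Data-oriented bullet update: positions and velocities live in flat
// parallel arrays, so each pass is one tight loop over contiguous data.
public class BulletSystem {
    int count;                 // number of live bullets
    final float[] posX, posY;  // positions, one entry per bullet
    final float[] velX, velY;  // velocities, same indexing
    final int[] hits;          // indices of bullets that hit something
    int hitCount;

    BulletSystem(int capacity) {
        posX = new float[capacity]; posY = new float[capacity];
        velX = new float[capacity]; velY = new float[capacity];
        hits = new int[capacity];
    }

    void update(float dt) {
        // pass 1: integrate positions; only position/velocity data
        // is pulled into the cache
        for (int i = 0; i < count; i++) {
            posX[i] += velX[i] * dt;
            posY[i] += velY[i] * dt;
        }
        // pass 2: collect collisions; the handling happens afterwards,
        // again as one batch over all hits
        hitCount = 0;
        for (int i = 0; i < count; i++) {
            if (hitSomething(posX[i], posY[i])) {
                hits[hitCount++] = i;
            }
        }
    }

    boolean hitSomething(float x, float y) {
        return false; // stand-in for the real collision query
    }
}
```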
So to get back to "where there's one there are many": when designing OO code you think about one object, the prototype/class. A bullet has a velocity, a bullet has a position, a bullet will move each frame by its velocity, a bullet can hit something, etc. When you think about that, you will think about a class with velocity, position, and an update function which moves the bullet and checks for collision. However, when you have multiple objects, you need to think about all of them. Bullets have positions and velocities. Some bullets may have a collision. Do you see how we're not thinking about an individual object any longer? We're thinking about all of them, and designing the code a lot differently now.
I hope this helps answer your second part of the question. It's not about whether you need to update each enemy or not, it's about the most efficient way to update them. And while designing only your enemies using DOD may not help gain much performance, designing the entire game around these principles (only where applicable!) may lead to a lot of performance gains!
So on to the first part of the question, that is, other examples of DOD. I'm sorry, but I don't have much there. There is one really good example, though, that I came across some time ago: a series on the data-oriented design of a behavior tree by Bjoern Knafla: http://bjoernknafla.com/data-oriented-behavior-tree-overview You probably want to start at the first one in the series of 4; links are in the article itself.
Hope this still helps, despite the old question. Or maybe some other SO user will come across this question and get some use out of this answer.
I read the question you linked to and the article.
I've read one book on the subject of data driven design.
I'm pretty much in the same boat as you.
The way I understand Noel's article is that you design your game in the typical object oriented way. You have classes and methods that work on the classes.
After you've done your design, you ask yourself the following question:
How can I arrange all of the data I've designed in one huge blob?
Think of it in terms of writing your entire design as one functional method, with lots of subordinate methods. It reminds me of the massive 500,000-line COBOL programs of my youth.
Now, you probably won't write the entire game as one huge functional method. Really, in the article, Noel is talking about the rendering portion of a game. Think of it as a game engine (the one huge functional method) and the code to drive the game engine (the OOP code).
What I'm especially interested in is the mantra "where there's one there are many", which I can't really seem to connect with the rest. Yes, there is always more than one enemy, yet you still need to update each enemy individually, because they're not all moving the same way, now are they?
You're thinking in terms of objects. Try thinking in terms of functionality.
Each enemy update is an iteration of a loop.
What's important is that the enemy data is structured to be in one memory location, rather than spread across enemy object instantiations.
I work in a medium sized team and I run into these painfully large class files on a regular basis. My first tendency is to go at them with a knife, but that usually just makes matters worse and puts me into a bad state of mind.
For example, imagine you were just given a Windows service to work on. Now there is a bug in this service, and you need to figure out what the service does before you can have any hope of fixing it. You open the service up and see that someone decided to just use one file for everything: the Start method is in there, the Stop method, timers, all the handling and functionality. I am talking thousands of lines of code, where methods under a hundred lines are rare.
Now, assuming you cannot rewrite the entire class and these god classes are just going to keep popping up, what is the best way to deal with them? Where do you start? What do you try to accomplish first? How do you deal with this kind of thing without just wanting to get all stabby?
If you have some strategy just to keep your temper in check, that is welcome as well.
Tips Thus Far:
Establish test coverage
Code folding
Reorganize existing methods
Document behavior as discovered
Aim for incremental improvement
Edit:
Charles Conway recommended a podcast which turned out to be very helpful. link
Michael Feathers (the guy in the podcast) begins with the premise that we are too afraid to simply take a project out of source control, just play with it directly, and then throw away the changes. I can say that I am guilty of this.
He essentially said to take the item you want to learn more about and just start pulling it apart. Discover its dependencies and then break them. Follow it through everywhere it goes.
Great Tip
Take the large class that is used elsewhere and have it implement an empty interface. Then take the code using the class and have it instantiate the interface instead. This will give you a complete list of all the dependencies on that large class in your code.
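In Java, the trick looks roughly like this (all names invented):

```java
// Step 1: an empty interface, implemented by the god class.
public interface GodServiceFacade { }

class GodService implements GodServiceFacade {
    // ...thousands of lines...
}

// Step 2: everywhere the concrete class was instantiated or declared,
// reference the interface instead:
//
//     GodServiceFacade svc = new GodService();
//
// Every call such as svc.startTimers() now fails to compile, and each
// compiler error marks one real dependency on the big class. Every
// method you are forced to add to the interface becomes one documented
// entry in its de-facto API.
```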
Ouch! Sounds like the place I used to work.
Take a look at Working Effectively with Legacy Code. It has some gems on how to deal with atrocious code.
DotNetRocks recently did a show on working with legacy code. There is no magic pill that is going to make it work.
The best advice I've heard is start incrementally wrapping the code in tests.
That reminds me of my current job and when I first joined. They didn't let me re-write anything, because I came in with the same argument: "These classes are so big and poorly written! No one could possibly understand them, let alone add new functionality to them."
So the first thing I would do is make sure there is comprehensive testing behind the areas that you're looking to change. At least then you will have a chance of changing the code and not having (too many) arguments (hopefully). And by tests, I mean testing the components functionally with integration or acceptance tests, and making sure they are 100% covered. If the tests are good, then you should be able to confidently change the code by splitting up the big class into smaller ones, getting rid of duplication, etc.
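For example, a first characterization test might do no more than pin down what the code does today (hypothetical names, JUnit 5 style):

```java
// Stand-in for the real legacy class you're about to touch (hypothetical):
class LegacyBilling {
    double computeInvoiceTotal(String customer, int year) {
        return 1234.56; // the real method is thousands of lines long
    }
}

// Characterization test: assert what the code demonstrably does right
// now, not what it should do, so refactoring can't silently change it.
class LegacyBillingCharacterizationTest {
    @org.junit.jupiter.api.Test
    void invoiceTotalStaysTheSameWhileWeRefactor() {
        LegacyBilling billing = new LegacyBilling();
        double total = billing.computeInvoiceTotal("ACME", 2024);
        // expected value captured from a run of the current code
        org.junit.jupiter.api.Assertions.assertEquals(1234.56, total, 0.01);
    }
}
```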
Even if you cannot refactor the file, try to reorganize it. Move methods/functions so that they are at least organized within the file logically. Then put in lots of comments explaining each section. No, you haven't rewritten the program, but at least now you can read it properly, and the next time you have to work on the file, you'll have lots of comments, written by you (which hopefully means that you will be able to understand them) which will help you deal with the program.
Code Folding can help.
If you can move stuff around within the giant class and organize it in a somewhat logical way, then you can put folds around various blocks.
Hide everything, and you're back to a C paradigm, except with folds rather than separate files.
I've come across this situation as well.
Personally, I print out the code first (yeah, it can be a lot of pages). Then I draw a box around sections of code that are not part of any "main loop" or are just helper functions, and make sure I understand these things first. The reason is that they are probably referred to many times within the main body of the class, and it's good to know what they do.
Second, I identify the main algorithm(s) and decompose them into their parts using a numbering system that alternates between numbers and letters (it's ugly, but it works well for me). For example, you could be looking at part of an algorithm 4 "levels" deep and the numbering would be 1.b.3.e or some other god-awful thing. Note that when I say levels, I am not necessarily referring directly to control blocks or scope, but to where I have identified steps and sub-steps of an algorithm.
Then it's a matter of just reading and re-reading the algorithm. When you start out it sounds like a lot of time, but I find that doing this develops a natural ability to comprehend a great deal of logic all at once. Also, if you discover an error attributed to this code, having visually broken it down on paper ahead of time helps you "navigate" the code later, since you have a sort of map of it in your head already.
If your bosses don't think you understand something until you have some form of UML describing it, a UML sequence diagram could help here if you pretend the sub-step levels are different "classes" represented horizontally, and start-to-finish is represented vertically from top-to-bottom.
I feel your pain. I tackled something like this once for a hobby project involving processing digital TV data on my computer. A fellow on a hardware forum had written an amazing tool for recording shows, seeing everything that was on, and more. Plus, he had done the incredibly vital work of working around bugs in real broadcast signals that were in violation of the standard. He'd done amazing work with thread scheduling to be sure that, no matter what, you wouldn't lose those real-time packets: on an old Pentium, he could record four streams simultaneously while also playing Doom and never lose a packet. In short, this code incorporated a ton of great knowledge. I was hoping to take some pieces and incorporate them into my own project.
I got the source code. One file, 22,000 lines of C, no abstraction. I spent hours reading it; there was all this great work, but it was all done badly. I was not able to reuse a single line or even a single idea.
I'm not sure what the moral of the story is, but if I had been forced to use this stuff at work, I would have begged permission to chip pieces off it one at a time, build unit tests for each piece, and eventually grow a new, sensible thing out of the pieces. This approach is a bit different than trying to refactor and maintain a large brick in place, but I would rather have left the legacy code untouched and tried to bring up a new system in parallel.
The first thing I would do is write some unit tests to box the current behavior, assuming that there are none already. Then I'd start in the area where I need to make the change and try to get that method cleaned up -- i.e. refactor working code before introducing changes. Use common refactoring techniques to extract and reuse methods from existing long methods to make them more understandable. When you extract a method, look for other places in the code where similar code exists, box that area, and reuse the method you've just extracted.
Look for groups of methods that "hang together" that can be broken out into their own classes. Write some tests for how those classes should work, build the classes using the existing code as a template if need be, then substitute the new classes into the existing code, removing the methods that they replace. Again, using your tests to make sure that you're not breaking anything.
Make enough improvement to the existing code so that you feel you can implement your new feature/fix in a clean way. Then write the tests for the new feature/fix and implement to pass the tests. Don't feel that you have to fix everything the first time. Aim for gradual improvement, but always leave the code better than you found it.
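As a tiny, made-up Java illustration of the extract-and-reuse step:

```java
public class OrderReport {
    // Before: this formatting logic was repeated inline in several
    // long methods. After extraction it is written once, tested once,
    // and reused everywhere.
    static String formatMoney(double amount) {
        return String.format("$%,.2f", amount);
    }

    String summaryLine(String customer, double total) {
        return customer + ": " + formatMoney(total);
    }

    String refundLine(String customer, double refund) {
        return customer + " refunded " + formatMoney(refund);
    }
}
```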
Say, for instance, I'm going to do some seat-of-my-pants coding, adding a feature to an enterprise app. What are some good examples/tenets/cardinal rules a person can follow for making a fairly complex setup/config screen not look like feet?
What I'm looking for is along the lines of "Don't put one thing in a group box." But I'd also like some help with symmetry, if anyone knows which layouts are most likely to achieve a reasonable amount of good looks.
Here's a cardinal rule you asked for: line up the controls vertically/horizontally, and equally space the various related elements. And use correct spelling on your labels!
We've all come across screens with misaligned controls (even a couple of pixels is noticeable) or misspelled labels. When this happens to me, I can't help but subconsciously look for other mistakes, and it decreases my confidence in the application I'm using!
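For what it's worth, most toolkits have layout managers precisely so you don't have to align controls by hand. A minimal Swing example (purely illustrative; your toolkit may differ):

```java
import javax.swing.*;
import java.awt.*;

// A small settings form where GridBagLayout keeps labels and fields
// aligned and equally spaced, instead of pixel-positioning each one.
public class SettingsForm {
    public static void main(String[] args) {
        JPanel panel = new JPanel(new GridBagLayout());
        GridBagConstraints c = new GridBagConstraints();
        c.insets = new Insets(4, 4, 4, 4); // uniform spacing everywhere
        String[] labels = {"Host:", "Port:", "Username:"};
        for (int row = 0; row < labels.length; row++) {
            c.gridx = 0; c.gridy = row;
            c.anchor = GridBagConstraints.LINE_END;  // right-align labels
            panel.add(new JLabel(labels[row]), c);
            c.gridx = 1;
            c.fill = GridBagConstraints.HORIZONTAL;  // fields share one edge
            panel.add(new JTextField(15), c);
            c.fill = GridBagConstraints.NONE;
        }
        JFrame frame = new JFrame("Settings");
        frame.add(panel);
        frame.pack();
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        frame.setVisible(true);
    }
}
```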
This is actually a huge topic. I frequently go to the Microsoft UX Guide for reminders on how to do this.
Some basics:
Make your app read like a book: left to right, top to bottom
Use goal-oriented language instead of technology-oriented language
Not a cardinal rule but a great resource:
Apple UI Guidelines (good info for any OS)
EDIT: Re: achieving symmetry - things don't have to be perfectly symmetrical, but you want a feel of balance. Take a step back and get a sense of whether the page or form feels like it's leaning/falling to the left or right.
E.g., with stackoverflow, the main content is to the left, but it's nicely balanced by the extra stuff on the right.
I find that paper is my friend. I like to write out a list of objectives the form has to accomplish, and then sketch the form by hand, labeling the parts. Drawing it out lets me get away from making sure it looks perfect and everything is aligned just right, and lets me focus on making sure that all the components I need are placed, hopefully somewhere logical. It also forces me to lay out the UI twice, so by the time I open my UI designer, I've already designed the form once and hopefully know what I'm doing.
Some basic rules for you.
Try to make effective use of whitespace. Don't cram everything together in an effort to get as much stuff on screen as possible. This will make grouped controls more clear and text more legible.
Basic typography. Limit your use of fonts to 1 or 2. Don't use bold too much or it loses its emphasis.
The same goes for colours. Don't use too many, the fewer the better most of the time.
Don't just use icons to save space. Tiny icons with no explanation are useless.
Copy. Not wholesale of course, but if you are not well-versed in UI design yourself, it makes sense to take elements of interfaces you know work and apply them in your own designs.
Be clear about the purpose of the interface. How does it fit within the broader application for example? And what are the specific objectives you are trying to satisfy with it?
Get people to test it for you, early and often. I don't know what setup you are working with, or what kind of organisation you are in, but getting some kind of human feedback on your work will always be helpful, even if you lack the time and expertise to conduct proper usability evaluations.
Since you use the term, "seat of your pants," I'm assuming that you don't want to spend too much time on the UI. If you are willing to devote some time to the UI, you may want to look into custom control or UI development that will suit your situation. Like Firefox's Options UI or the .NET project properties in Visual Studio 2008.
If you are looking for something using standard controls, it is probably best to separate different sections of related items into tabs or some other type of stacking control (e.g., a Ribbon control). A good example of the tabbed version is the Notepad++ Preferences UI. Many other programs use a similar scheme.
The best way to get a UI that makes sense is to follow Joel's advice:
Eat your own dog food.
Do it a few times with your own UI, and you'll notice some things you didn't think of initially.
I've found that a really good test is getting someone non-technical to use your GUI. Watching someone use it for 5-10 minutes normally gives me a very good idea of what is and isn't easy to understand.
This series by Joel Spolsky is a pretty good read, and Jakob Nielsen's material on usability and web design is pretty useful.
Specific rules I try and use are:
Put items in logical groups
Line everything up
Use sensible images/icons
Spend 5-10 minutes thinking through why things are the way they are
Only use words that make sense to the user not to you!
Start from the setup/config UI of an existing application that you feel is both simple and usable.
Most tenets/cardinal rules apply to UI in general and fill hundreds and hundreds of pages in UI design and HCI books, so you probably want to just work by example for now, while trying to capitalize on existing user experience (habits), i.e. obeying the rule of "least surprise": e.g., if your application is a Windows application, use the Installation Wizard pattern; if it's an ncurses app for a particular flavor of *nix, follow the style of that particular OS's actual installation UI, etc.
You might be interested in the book "Don't Make Me Think," (author's web site) or "About Face 3.0". Both come highly recommended for reading about how to design interfaces.