Moving an actor inside the system boundary of a UML use case diagram

So my question is as follows: is it possible to move an actor inside the system boundary of a use case diagram? Can it be part of the system?
I set a server as an actor, where a customer interacts with the server in an e-commerce environment. Is this correct, or should I move the server inside the system, since the server is part of the system the customer is interacting with?
This server will then most likely also be used by an admin role.

TL;DR
No, you can't do that, unless you model only a part of the system.
Explanation
By definition, an actor is external to the system. It can be a user, another system, or a sensor.
If you want to show a system decomposition into smaller parts, use a component diagram.
Note that the role of a use case diagram is to show the functions of the system as a whole.
On the other hand, you may depict just one part of the system (e.g. a single system tier). In such a case the other parts (tiers) are external to the modeled part under consideration.

I suppose you mean "move an actor inside the system boundary", since in any case the actor appears inside the UC diagram (or you just won't see it).
You can do that. However, it would be rather pointless, since actors are meant to interact with the system under consideration (SUC) from the outside. The only case where this makes sense is when you create sub-systems (that is, you have boundaries of sub-systems within the SUC boundary). I wouldn't do that from the very beginning either; only in a later design phase would you introduce such a construct. In that case you'd have independent teams working on the different sub-systems and one on the integration of the SUC. For "normally" sized systems you should just leave these sub-systems out and focus on actors and their UCs inside the SUC boundary.

Related

How to draw sequence diagram for multiple-stages use case?

In the current system, some functionalities/use cases may need multiple stages to complete their whole procedure. Examples include user registration, which may contain two stages: inserting the user record into the database, and email activation. Or, for a money transfer in net banking, the stages of filling in the transfer information and of SMS verification are required before the actual transaction happens. I want to know how to draw a sequence diagram for this kind of functionality/use case.
To be more specific, I have drawn a sequence diagram for the user registration example using the MVC pattern, shown at the bottom. On this diagram, the two stages are divided by the red line. I have also skipped the logic that handles failures or exceptions (like invalid input, or the username already existing in the database) to keep it simpler. It seems that there are many objects involved in this diagram. Should I draw a sequence diagram this way, or is using two sequence diagrams, one for each stage, better?
Generally speaking, the smaller the diagram, the better. If you are doing it for documentation/informational purposes, then clarity should be your priority.
a) breaking it up
In your particular example, breaking it into (at least) two sequence diagrams would be a good option, as both those activities are usually performed at different times.
The way you could look at this is:
the first diagram takes an unregistered user as input and produces an unverified user
the second diagram takes an unverified user as input and produces a verified user
That way you wouldn't need to worry about connecting the two diagrams, as they are really not that dependent on each other.
b) using interaction overview
That being said, UML has interaction overview diagrams (http://www.uml-diagrams.org/interaction-overview-diagrams.html) that allow you to stitch together various diagrams, so you could split it into two and then just connect them. Or you could use a state machine diagram to track the mutation of the user; the UML vocabulary is rather large, so there is no need to limit yourself to just one diagram type.
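To illustrate the state-machine view, here is a minimal sketch in Java (the names are my own, purely illustrative): stage one moves the user from UNREGISTERED to UNVERIFIED, stage two from UNVERIFIED to VERIFIED, which is exactly why the two sequence diagrams are not that dependent on each other.

// A sketch of the user's state machine across the two registration stages.
public class Registration {
    enum UserState { UNREGISTERED, UNVERIFIED, VERIFIED }

    private UserState state = UserState.UNREGISTERED;

    // Stage 1: insert the user record into the database, send the activation mail.
    void register() {
        if (state != UserState.UNREGISTERED) throw new IllegalStateException();
        state = UserState.UNVERIFIED;
    }

    // Stage 2: the user follows the activation link from the email.
    void activate() {
        if (state != UserState.UNVERIFIED) throw new IllegalStateException();
        state = UserState.VERIFIED;
    }
}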

Implementing a model written in predicate calculus in Prolog: how do I start?

I have four sets of algorithms that I want to set up as modules, but I need all algorithms executed at the same time within each module. I'm a complete noob and have no programming experience. I do, however, know how to prove my models are decidable and have already done so (I know applied logic).
The models are sensory parsers. I know how to create the state spaces for the modules, but I don't know how to program driver access into Prolog for my webcam (I have a Toshiba Satellite laptop with a built-in webcam). I also don't know how to link the input from the webcam to the variables in the algorithms I've written. The variables I use, when combined and identified with functions, are set to identify unknown input using a probabilistic database search for the best match after a breadth-first search. The parsers aren't holistic, which is why I want to run them either in parallel or as needed.
How should I go about this?
"I also don't know how to link the input from the web cam to the variables in the algorithms I've written."
I think the most common way to do this is the machine learning approach: first calculate features from your video stream (like the position of color blobs, optical flow, the amount of green in the image, whatever you like). Then you use supervised learning on labeled data to train models like HMMs, SVMs, or ANNs to recognize the labels from the features. The labels are usually higher-level things like faces, a smile, or waving hands.
Depending on the nature of your "variables", they may already be covered at the feature level, i.e. they can be computed from the data in a known way. If this is the case, you can get away without training/learning.
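To make the feature idea concrete, here is a minimal sketch in Java (the threshold and class name are my own invention, and how you grab a frame from the webcam is left to whatever capture library you use) that computes one trivial feature, the fraction of roughly green pixels in a frame:

import java.awt.image.BufferedImage;

public class GreenFeature {
    // Fraction of pixels that are "mostly green" - one crude frame-level feature.
    static double greenFraction(BufferedImage frame) {
        int green = 0, total = frame.getWidth() * frame.getHeight();
        for (int y = 0; y < frame.getHeight(); y++) {
            for (int x = 0; x < frame.getWidth(); x++) {
                int rgb = frame.getRGB(x, y);
                int r = (rgb >> 16) & 0xFF, g = (rgb >> 8) & 0xFF, b = rgb & 0xFF;
                if (g > r + 20 && g > b + 20) green++;   // arbitrary threshold
            }
        }
        return (double) green / total;
    }

    public static void main(String[] args) {
        BufferedImage img = new BufferedImage(2, 2, BufferedImage.TYPE_INT_RGB);
        img.setRGB(0, 0, 0x00FF00);              // one pure green pixel of four
        System.out.println(greenFraction(img));  // 0.25
    }
}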

POS UI design & development: what should be included & avoided? [closed]

I'm having to design & develop UI for a Point of Sale (POS) system.
There are obvious features that need to be included, like product selection & quantity, payment method, tender amount, user login (as many users will use one terminal), etc.
My question is related more towards the UI design aspect of developing this system.
How should UI features/controls be positioned and sized?
Is there a preferred layout?
Are there colours I should be avoiding?
If you know of any resources to guide me, that would also help.
The reason this is critical to me is that I am aware of the pressurized environment in which POS systems are used, and I want to make the process as (i) quick, (ii) simple to use, and (iii) result-driven as possible for the user to service customers.
All answers, info & suggestions welcome.
Thanks.
P.S. If you could mention the trade-offs between controls, that would also be appreciated (i.e. if a touch screen provides a keypad control, but you are also supporting keyboard & mouse input, how do you manage the keypad & UI space effectively?)
A couple of thoughts from a couple of projects I've worked with:
For the touch screen, ensure that each button can be pressed as easily by someone with "fat fingers" as by someone with smaller ones (some layouts encourage the use of thumbs at particular locations). Also highlight each button when it is pressed (with a slow-ish fade if you have spare CPU cycles).
Bigger grids are better than smaller ones. The numeric pad should always be in the same place (often the bottom right). The Enter/Tender/etc. "transaction" keys should be bigger than the individual numeric keys - (1) make it more obvious where it is, (2) it will be pressed more often than other screen areas and will wear out (a bigger area will last longer on average; this was more important with older style touch screens; newer technology is more resilient).
Allow functions/SKUs to be reassigned to different grid positions; the layout that works well for one store will likely be wrong for a slightly different one.
Group related functions by colour, but use excellent contrasts. Make sure that the fore/back combination looks good at all angles (some LCDs "bleed" colours left-to-right and/or top-to-bottom angles).
Positive touch screen feedback with sounds needs to have configurable volume and sound sets. Muted tones might be better in a quieter upmarket store, but "perky" sounds are better in a clothing store with louder background music/noise, etc.
Allow the grid size to be specified in percentages or "grid-block units" instead of pixels and draw everything with vectors, etc. since some hardware combinations may have LCDs with better resolution. (One system I worked on was originally specified as 640x480 but shipped at 1280x1024, so my design pre-planning saved a lot of rework later.)
And of course look at the ready-made solutions first (especially if you can get demo software/hardware for evaluation). Although they can be expensive, they've often implemented a lot of things that you'll have to work through later, and may be cheaper in the long run, even after creating custom add-ons for your system.
Also:
Our UI supported a normal keyboard/mouse combo too (the touchable buttons were just standard button controls sized appropriately). If you pressed a number key it would trigger the same event as clicking the screen-pad button; other hotkeys were mapped to often used button commands (Enter, etc).
If run on a non-POS desktop (e.g. backoffice) the window could be resized too (the "POS desktop" maintained the same aspect ratio, adding dead space at the sides if needed). A standard top menu was available for additional administrative tasks, reporting, etc.
The design allowed everyone to build and test the UI before the associated hardware was finalized. And standard UI testing tools would work too.
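A minimal sketch of that wiring, assuming a Swing UI (the component names are illustrative): one Action backs both the on-screen keypad button and the physical key, so touch and keyboard fire the identical event.

import javax.swing.*;
import java.awt.event.ActionEvent;

public class KeypadWiring {
    public static void main(String[] args) {
        JPanel pad = new JPanel();
        Action five = new AbstractAction("5") {
            @Override public void actionPerformed(ActionEvent e) {
                System.out.println("digit 5 entered");  // append to the entry field in a real POS
            }
        };
        pad.add(new JButton(five));                     // the touchable screen-pad button
        // Map the physical '5' key to the very same Action.
        pad.getInputMap(JComponent.WHEN_IN_FOCUSED_WINDOW)
           .put(KeyStroke.getKeyStroke('5'), "digit5");
        pad.getActionMap().put("digit5", five);

        JFrame f = new JFrame("POS keypad sketch");
        f.add(pad);
        f.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        f.pack();
        f.setVisible(true);
    }
}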
Even More:
Our barcode scanners were serial/USB rather than keyboard-like, so each packet from the device raised a comms event. The selected "scanner type" driver class used the most secure formatting that the device could give us - some can supply prefix, suffix and/or checksum characters if programmed correctly - and then stripped this before handing the code up to the application.
The system made a "bzzzt" noise when the barcode couldn't be used (e.g. while cash drawer is open).
This design also avoided the need to set the keyboard focus to a specific entry area.
A tip: if the user is manually entering a barcode via the keypad, and hasn't completed it by pressing Enter, and then attempts to scan another barcode, it should beep instead, so the user can accept or cancel the pending item first.
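As a rough sketch of the stripping step (the framing characters and checksum scheme here are hypothetical; real scanners vary by configuration):

public class ScannerPacket {
    private static final char PREFIX = 0x02;   // STX - assumed scanner configuration
    private static final char SUFFIX = 0x03;   // ETX - assumed scanner configuration

    // Returns the bare barcode, or null if framing/checksum is invalid
    // (the caller plays the "bzzzt" sound on null).
    static String decode(String packet) {
        if (packet.length() < 4
                || packet.charAt(0) != PREFIX
                || packet.charAt(packet.length() - 2) != SUFFIX) {
            return null;
        }
        String code = packet.substring(1, packet.length() - 2);
        char expected = 0;
        for (char c : code.toCharArray()) expected ^= c;   // simple XOR checksum
        return (packet.charAt(packet.length() - 1) == expected) ? code : null;
    }
}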
Aggregated POS Design Guidelines
Based on the above and other literature, here is my list of guidelines for POS design.
[it would be nice if we grew this list further]
User Performance Priorities (in order):
efficiency (least time to transaction conclusion)
effectiveness (accurate info & output)
user satisfaction (based on first 2 in work context)
learning time (reduce time to learn system by making it simple)
GUIDELINES
Flexible Transaction Building - don't force a sequence on the transaction where possible. Accept product orders in any order & allow them to be changed up to a point.
Optimise Transaction Rate - allow a user to complete a transaction as quickly as possible (the fewest clicks are not really the issue, as more clicks could mean a larger transaction value, which makes business sense).
Support Handedness / Dexterity - most users have a dominant hand and a weaker hand in terms of dexterity. Allow the UI to be customised (with a single click) for handedness. My example: an L->R / R->L toggle button which moves easy features like "OK" and "Cancel" into nearer proximity to the weaker hand (see the sketch after this list).
Constant Feedback - provide snapshot feedback which describes the current state of the transaction and the calculated result of the transaction (NB: accounts) before & after committing a transaction.
Control "Volume" - control volume refers to the colour saturation/contrast, prominence of positioning, and size of a control. Design more frequently used controls to have larger "volumes" relative to less frequently used controls, e.g. a "Pay" button larger than a "Cancel" button. High contrast & greater colour saturation increase volume.
Target Findability - finding & selecting targets (an item, a numeric key) is key to efficiency. Grouping related controls (close proximity), placing controls on screen edges (the screen edge traps the pointer), emphasising control amplitude (this dimension emphasises the user's normal plane of motion) and colour coding all make finding & selecting targets more efficient.
Avoid Clutter - too many options limit control volume and reduce findability.
Use Plain Text - avoid abbreviations as much as possible (only use standard abbreviations, e.g. sizes: S, M, L, etc.). This is especially true for product lookup.
Product Lookup - support shortcuts for regular orders (e.g. a burger meal), categorised browsing & item-name search (for the least-ordered items). Consider including a special item: any item where the user types what is wanted (e.g. a specific whiskey order) - this requires pricing, though.
Avoid User Burden - the user should be able to read answers to customer questions from the UI. So provide regularly requested/prioritised feedback on the transaction (e.g. the customer asks: "what will the outstanding balance on my account be if I buy this item?" - it should already appear in the UI).
Conversational Ordering - the customer drives the ordering, not the system. So allow item selection to be non-sequential.
Objective Focused - the purpose of a POS is to conclude the transaction from a business perspective. Always make transaction conclusion possible immediately with a "Pay" button. If clicked, any incomplete items will be undone; the user then reads the order back before requesting cash/credit card.
Personas - there are different categories (personas) of users of POS systems, e.g. (i) Clerk/Cashier and (ii) Manager. The UI should present the options relevant to the logged-in persona according to these guidelines, i.e. Cashier: large volume on transaction-building controls; Manager: large volume on transaction/user-management controls.
Touch Screens - (i) allow for touch input with generally larger controls to support a large fingertip as the pointer. (ii) Provide proprioceptive feedback - feedback that indicates which control was pushed (it should have a short delay on its fade: the user's finger will be in the way initially). (iii) Auditory feedback (optional) - this helps with feedback, especially regarding errors, in a pressurized environment.
User Training - users must be trained to understand the business protocol & how the POS supports that protocol. They are the ones driving the system. Also, speak to POS users to design & enhance your system - again, they are the experienced users of the POS system.
Context Analysis - a thorough analysis of the context of use for your POS system should be performed to implement the POS heuristics mentioned above effectively. Understanding the user (human factors), the tasks (frequency, duration, stress factors, etc.) and the environment (lighting, hardware, space layout, etc.) should be done comprehensively during design and should not be assumed. Get your hands dirty & get into the users' work space!! That way you can develop something your specific users can use effectively, efficiently and satisfactorily.
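For guideline 3 (Support Handedness / Dexterity), here is a minimal sketch assuming a Swing UI (the component names are mine): Swing can mirror a whole container's layout in one call, which is enough for a one-click L->R / R->L toggle.

import javax.swing.*;
import java.awt.*;

public class HandednessToggle {
    public static void main(String[] args) {
        JFrame frame = new JFrame("Handedness sketch");
        JPanel actions = new JPanel(new FlowLayout(FlowLayout.LEADING));
        actions.add(new JButton("OK"));
        actions.add(new JButton("Cancel"));

        JToggleButton toggle = new JToggleButton("L <-> R");
        toggle.addActionListener(e -> {
            // One click mirrors the whole panel, moving OK/Cancel to the other side.
            actions.applyComponentOrientation(toggle.isSelected()
                    ? ComponentOrientation.RIGHT_TO_LEFT
                    : ComponentOrientation.LEFT_TO_RIGHT);
            actions.revalidate();
        });

        frame.add(actions, BorderLayout.CENTER);
        frame.add(toggle, BorderLayout.SOUTH);
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        frame.setSize(320, 120);
        frame.setVisible(true);
    }
}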
I hope this helps everyone.
To all respondents, I really appreciate your feedback! Please give me more with regard to this answer. Thanks.
I ran across this question, and I thought I'd add my two cents since some of my work has been mentioned here.
I agree with most of what's been said, but it's important to remember that most everything mentioned represents heuristics. That means that while they're good principles to follow, there are likely times when (a) specific rules should be broken, and (b) there will be contradictions between rules. The trick is being able to weigh conflicting principles and apply them to the appropriate degrees (as you noted in a previous comment).
In the end, it's a matter of balancing the business requirements and user needs in a way that produces optimal results. And in the real world, I find that this can never be achieved through heuristics alone.
Here's an example: I recently finished POS designs for Subway, Wendy's, and Starbucks (see Case Studies at POSDesigns.com). All of these designs used solid heuristics, but all of them came out very, very different because of differences in the business goals and requirements, the users' needs and background, the environment in which they work, the technology being used, and a whole host of other differences.
You can never create a great design in a vacuum. For each of the clients mentioned above, I traveled around to a lot of different types of stores in multiple countries to get a feel for how the users' worked, how the systems would be used, how customers ordered, etc. All this information - along with sales and other data provided by the company - was invaluable in creating a highly usable solution.
Here's another example: Guideline #3 that you provided previously ("Support Handedness / Dexterity") is fine as a heuristic (though I have to say that I question the conclusion of simply swapping OK/Cancel). But in visiting Subway stores, we discovered that in that context, the location of the register actually plays a greater role in which hand employees prefer.
In other words, registers that were squished against a wall on the right side tended to produce left-handed users, even when the users were right-handed for every other task. This had implications for the way we allowed the UI to be flip-flopped...and who had control of it. There are tons of examples like that, but we never could have achieved the gains that the user interfaces have produced - like 90% reduction in voids, near zero training, increased speed, accuracy, and check sizes, etc. - by following heuristics alone.
One more point (sorry...you've got me going now :-). Many times heuristics are incomplete without more data as to how to apply them. Consider your guideline #11, "Conversational Ordering". There's much more to this guideline than just providing flexibility in order entry. For instance, one of the many things you have to consider is that not all paths should be presented as equally probable.
We analyzed the way Starbucks' customers ordered in various locations across the United States and United Kingdom. Then, we optimized the system for the most commonly spoken patterns. If we had allowed all paths to have the same "volume", we would have sacrificed usability in other areas, since the design would have appeared more cluttered. The new POS system now supports almost all possible order patterns, but the most probable paths are presented at a higher "volume" than those that are less probable.
OK, it turned out to be more than two cents, but the bottom line is this: If you have a chance to visit the environments in which your POS will be used, analyze customer/employee interactions, etc. ...you should take it. Contextual observations and analysis are invaluable in correctly applying heuristics to your situation.
Good luck!
Dr. Kevin Scoresby
FYI - I'd enjoy talking further about this if you or anyone else in the group would like. My office phone number is on my "About Us" page at POSDesigns.com, or you can use the form to initiate an email conversation. Feel free to call anytime during business hours U.S., East Coast Time.
Devstuff already provides some great answers. In addition:
Create a prototype design (can be simply color-printed on paper) and test this in a scenario that is as realistic as possible, i.e. in a store, with a real future user. Enact a few common scenarios and ask the user to really 'use' your prototype as he / she would use the final product. Obtain feedback through interview and observation.
One way to evaluate your design is to check if you have applied the CRAP principles of design. This article discusses how this can relate to user interface design.
In addition to what has already been posted, here are some tips we picked up along the way.
We use two distinct UIs, one for touch screens with large bold buttons and one for mouse/keyboard entry. The code behind them is the same; just the layout is different.
For touch screens
Try not to have pop-up messages that take focus away from the main form, as users may not be looking at the screen; for example, they may be chatting with the customer. We found that if this happens, users will continue scanning products, unaware that they are not being entered into the sale.
If you use a barcode scanner, be aware that some send an Enter key after the barcode, which will activate the focused control (saying yes/no to pop-ups). To help prevent this, we disable the Enter key-press on buttons, so only a mouse/finger press will fire the click event. We also set tab stop to false (it may be called something different in your language) to stop controls that are touch-only from getting focus.
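A minimal sketch of those two mitigations, written against Swing purely for illustration (the original toolkit is not specified):

import javax.swing.*;

public class TouchButtonSetup {
    static void makeTouchOnly(JButton button) {
        // A stray Enter from the barcode scanner must never "click" this button.
        InputMap im = button.getInputMap(JComponent.WHEN_FOCUSED);
        im.put(KeyStroke.getKeyStroke("ENTER"), "none");
        im.put(KeyStroke.getKeyStroke("released ENTER"), "none");
        im.put(KeyStroke.getKeyStroke("SPACE"), "none");
        im.put(KeyStroke.getKeyStroke("released SPACE"), "none");
        // "Tab stop false": the button never takes keyboard focus at all.
        button.setFocusable(false);
    }
}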
As far as colours go, we try to stick to bold button and font colours that can easily be distinguished/read in poorly lit rooms and on screens with glare, as most of the time users are not in a position to move the screen should they have problems reading it.
Anything you can do to speed up/help the user is a good thing. For example, on our payment screen, as well as having 0-9 keys for payment entry, we also have £1, £2, £5, £10, etc. keys, so users don't have to add up the money they are given; they can just press the key for each coin/note they received from the customer.
The best tip I can give is to remember that you are designing for a completely different environment from a desktop application that would be used in an office, and that users may never have used a computer before. Since POS systems are usually locked down, try to make them as easy to use out of the box as possible.
Another thing to consider is personas (as introduced in Cooper's "The Inmates Are Running the Asylum").
Essentially, you make up a few canonical "users". Give them names, hobbies, skills, a picture, and use them as the people you are designing for.
E.g.:
Billy the cashier: has some computer experience (playing on his PS2). He's in high school and may go to community college. He's a primary user of the system, and wants to be able to learn the new system quickly.
Cyrus the manager: needs to manage the cashiers. He needs a way, with only his authorization, to void transactions and to review logs of the sales for making reports, as well as for managing "shrinkage" (theft). He has 2 kids and lives out in the suburbs, so he has a 45-minute commute; therefore he doesn't want to spend extra time wrangling the system.
You may need three or four personas; any more than that and it becomes hard to design for.
I highly recommend the book "The Inmates Are Running the Asylum"; Cooper also wrote another book, "About Face", which I have yet to read.
Good luck!
I would recommend doing some sort of usability survey amongst your current user group. There is no need for this to be a complicated or highly scientific survey. Present them with simple questions to determine:
The users' priorities when using the system (accuracy of task, speed, aesthetics)
Preferred input devices
Work flow through such a system
Level of education and domain knowledge
I have found that a lot can be learnt from a simple survey like this, and it can be applied to your UI design to ensure that the users' experience is satisfactory.
Great comments from everyone else. I'll just add that there's also an article by Dr. Kevin Scoresby titled "How to Design a (POS) System that Everybody Hates" that discusses usability of POS systems and adds a few points to what people have already mentioned, such as:
Don't punish the employee for customer choices
Avoid creating conflicts with the real world
Avoid color coding (1 out of 10 males has a form of color blindness)
I've also discovered lots of helpful POS design tips at POSDesigns.com. One thing I found interesting is that by focusing too much on the number of button presses, you can actually impact speed--which is often a primary goal. There's also a tip titled "Five Factors that Influence Speed" that I found helpful.
Good luck!
Kyle
There are already some really good systems out there, e.g. Tabtill for Win 8 (http://www.tabtill.com) or Shopkeep for iOS (http://www.shopkeep.com). The fewer clicks your users need to make, the better. As I am also involved in coding such solutions and have worked with clients using various POS systems, I can say some are really frustrating. I remember watching cashiers in a bar tapping 10 times just to take payment for a couple of items, their fingers hopelessly hovering over the screen trying to find the right colourful button. Keep it simple! Allow sorting of your visible product range, categorize it, or use a barcode reader. Keep at least a 5% gap between buttons and don't let silly animations slow down your UI. Either invent your own design or copy what is already out there, with your own twist.

What are pro/cons of push/pull data flow models?

I've been developing an in-house DSP application (Java with hooks for Groovy/Jython/JRuby, plugins via OSGi, plenty of JNI to go around) in a data flow/diagram style, similar to Pure Data and Simulink. My current design is a push model. The user interacts with some source component, causing it to push data onto the next component, and so on, until an end block (typically a display or file writer). There are some unique challenges with this design, specifically when a component starves for input: there is no easy way to request more input. I have mitigated some of this with feedback control flow, e.g. an FFT block can broadcast to the source block of its chain that it needs more data. I've contemplated adding support for components to be either push/pull/both.
I'm looking for responses regarding the merits of push vs pull vs both/hybrid. Have you done this before? What are some of the "gotchas"? How did you handle them? Is there a better solution for this problem?
Some experience with a "mostly-pull" approach in a large-scale product:
Model: Nodes build a 1:N tree, i.e. each component (except the root) has 1 parent and 1..N children. Data flows almost exclusively from parent to children. Change notifications can originate from any node in the tree.
Implementation: All leaves are notified with the sending node's id and a "generation" counter. Leaves know which node path they depend on, so they know if they need to update. (Any other child-node update algorithm would do too, and might have been better in hindsight.)
Leaves query their parent for the current data; the query bubbles up recursively. The generation counter is included, so the bubble-up stops at the originating node.
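In code, one notify-then-pull cycle might look roughly like this (all names are mine, not from the actual product):

class PullNode {
    PullNode parent;
    long generation;        // generation of the data currently held here
    Object data;

    // Leaf side: notification carries the originator and its new generation.
    void onChanged(PullNode origin, long gen) {
        if (gen <= generation) return;      // stale or duplicate notification
        data = queryUp(gen);                // pull current data from the ancestors
        generation = gen;
    }

    // The query bubbles up and stops at a node that already holds this
    // generation (i.e. the originating node).
    Object queryUp(long gen) {
        if (generation >= gen || parent == null) return data;
        return parent.queryUp(gen);
    }
}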
Advantages:
Parent nodes don't need much/any information about their children. Data can be consumed by anyone; this allowed a generic approach to implementing some (initially unexpected) non-UI functionality on top of the data intended for display.
Child nodes can aggregate and delay updates (avoiding repaints sure beats fast painting).
Inactive leaves cause no data traffic at all.
Disadvantages:
Incremental updates are expensive, as the full data is published.
The implementation actually allows different data packets to be requested (and the generation counter could prevent unnecessary data traffic), but the data packets as initially designed are very large. Slicing them was an afterthought, but works OK.
You need a really good generation mechanism. The one initially implemented collided with initial updates (which need special handling - see "incremental updates") and with the aggregation of updates.
The need for data travelling up the tree was greatly underestimated.
Publishing is cheap only when the node offers read-only access to the current data. This might require additional update synchronization, though.
Sometimes you want intermediate nodes to update even when all leaves are inactive.
Some leaves ended up implementing polling, and some base nodes ended up relying on that. Ugly.
Generally:
Data-Pull "feels" more native to me when data and processing layer should know nothing about the UI. However, it requires a complex change notificatin mechanism to avoid "Updating the universe".
Data-Push simplifies incremental updates, but only if the sender intimately knows the receiver.
I have no experience of similar scale using other models, so I can't really make a recommendation. Looking back, I see that I've mostly used pull, which was less of a hassle. It would be interesting to see other peoples experiences.
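For reference, a minimal sketch of the two styles side by side (illustrative names only; this is not the poster's framework):

import java.util.function.Consumer;
import java.util.function.Supplier;

public class PushVsPull {
    public static void main(String[] args) {
        // Push: the source drives; each stage hands its result to the next.
        Consumer<double[]> display = buf ->
                System.out.println("got " + buf.length + " samples");
        Consumer<double[]> gainPush = buf -> {
            for (int i = 0; i < buf.length; i++) buf[i] *= 2.0;
            display.accept(buf);                     // push downstream
        };
        gainPush.accept(new double[] {1, 2, 3});     // source pushes when it has data

        // Pull: the sink drives; each stage requests data from its upstream,
        // so asking for "more input" is trivial.
        Supplier<double[]> source = () -> new double[] {1, 2, 3};
        Supplier<double[]> gainPull = () -> {
            double[] buf = source.get();             // pull upstream
            for (int i = 0; i < buf.length; i++) buf[i] *= 2.0;
            return buf;
        };
        System.out.println("got " + gainPull.get().length + " samples");
    }
}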
I work on a pure-pull image processing library. It's more geared to batch-style operations where we don't have to deal with dynamic inputs and for that it seems to work very well. Pull works especially well for large data sets and for threading: we scale linearly to at least 32 CPUs (depending on the graph being evaluated, of course, heh).
We have a GUI that allows leaves to be dynamic data sources (for example, a video camera delivering frames) and they are handled by throwing away and rebuilding the relevant parts of the graph on a change. This is cheap in our case, so the overhead isn't that high.

How can I make my applications scale well?

In general, what kinds of design decisions help an application scale well?
(Note: Having just learned about Big O Notation, I'm looking to gather more principles of programming here. I've attempted to explain Big O Notation by answering my own question below, but I want the community to improve both this question and the answers.)
Responses so far
1) Define scaling. Do you need to scale for lots of users, traffic, or objects in a virtual environment?
2) Look at your algorithms. Will the amount of work they do scale linearly with the actual amount of work - i.e. the number of items to loop through, the number of users, etc.?
3) Look at your hardware. Is your application designed such that you can run it on multiple machines if one can't keep up?
Secondary thoughts
1) Don't optimize too much too soon - test first. Maybe bottlenecks will happen in unforeseen places.
2) Maybe the need to scale will not outpace Moore's Law, and maybe upgrading hardware will be cheaper than refactoring.
The only thing I would say is write your application so that it can be deployed on a cluster from the very start. Anything above that is a premature optimisation. Your first job should be getting enough users to have a scaling problem.
Build the code as simple as you can first, then profile the system second and optimise only when there is an obvious performance problem.
Often the figures from profiling your code are counter-intuitive; the bottle-necks tend to reside in modules you didn't think would be slow. Data is king when it comes to optimisation. If you optimise the parts you think will be slow, you will often optimise the wrong things.
Ok, so you've hit on a key point in using the "big O notation". That's one dimension that can certainly bite you in the rear if you're not paying attention. There are also other dimensions at play that some folks don't see through the "big O" glasses (but if you look closer they really are).
A simple example of that dimension is a database join. There are "best practices" in constructing, say, a left inner join which will help to make the sql execute more efficiently. If you break down the relational calculus or even look at an explain plan (Oracle) you can easily see which indexes are being used in which order and if any table scans or nested operations are occurring.
The concept of profiling is also key. Your system has to be instrumented thoroughly and at the right granularity across all the moving parts of the architecture in order to identify and fix any inefficiencies. Say, for example, you're building a 3-tier, multi-threaded, MVC2 web-based application with liberal use of AJAX and client-side processing, along with an OR mapper between your app and the DB. A simplistic linear single request/response flow looks like:
browser -> web server -> app server -> DB -> app server -> XSLT -> web server -> browser JS engine execution & rendering
You should have some method for measuring performance (response times, throughput measured in "stuff per unit time", etc.) in each of those distinct areas, not only at the box and OS level (CPU, memory, disk I/O, etc.), but specific to each tier's service. So on the web server you'll need to know all the counters for the web server you're using. In the app tier, you'll need that plus visibility into whatever virtual machine you're using (JVM, CLR, whatever). Most OR mappers manifest inside the virtual machine, so make sure you're paying attention to all the specifics if they're visible to you at that layer. Inside the DB, you'll need to know everything that's being executed and all the specific tuning parameters for your flavor of DB. If you have big bucks, BMC Patrol is a pretty good bet for most of it (with appropriate knowledge modules (KMs)). At the cheap end, you can certainly roll your own, but your mileage will vary based on your depth of expertise.
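As a trivial illustration of that kind of instrumentation (the helper below is hypothetical, not from any monitoring product), wrap each unit of work in a timer and hand the elapsed time to whatever metrics sink the tier uses:

import java.util.function.Supplier;

public class Timed {
    static <T> T measure(String label, Supplier<T> work) {
        long start = System.nanoTime();
        try {
            return work.get();
        } finally {
            long micros = (System.nanoTime() - start) / 1_000;
            // In a real system, feed a per-tier metrics sink instead of stdout.
            System.out.printf("%s took %d us%n", label, micros);
        }
    }

    public static void main(String[] args) {
        String page = measure("render", () -> "<html>...</html>");
        System.out.println(page.length());
    }
}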
Presuming everything is synchronous (no queue-based things going on that you need to wait for), there are tons of opportunities for performance and/or scalability issues. But since your post is about scalability, let's ignore the browser except for any remote XHR calls that will invoke another request/response from the web server.
So given this problem domain, what decisions could you make to help with scalability?
Connection handling. This is also bound to session management and authentication. That has to be as clean and lightweight as possible without compromising security. The metric is maximum connections per unit time.
Session failover at each tier. Necessary or not? We assume that each tier will be a cluster of boxes horizontally under some load balancing mechanism. Load balancing is typically very lightweight, but some implementations of session failover can be heavier than desired. Also whether you're running with sticky sessions can impact your options deeper in the architecture. You also have to decide whether to tie a web server to a specific app server or not. In the .NET remoting world, it's probably easier to tether them together. If you use the Microsoft stack, it may be more scalable to do 2-tier (skip the remoting), but you have to make a substantial security tradeoff. On the java side, I've always seen it at least 3-tier. No reason to do it otherwise.
Object hierarchy. Inside the app, you need the cleanest, lightest-weight object structure possible. Only bring the data you need when you need it. Viciously excise any unnecessary or superfluous fetching of data.
OR mapper inefficiencies. There is an impedance mismatch between object design and relational design. The many-to-many construct in an RDBMS is in direct conflict with object hierarchies (person.address vs. location.resident). The more complex your data structures, the less efficient your OR mapper will be. At some point you may have to cut bait in a one-off situation and do a more...uh...primitive data access approach (Stored Procedure + Data Access Layer) in order to squeeze more performance or scalability out of a particularly ugly module. Understand the cost involved and make it a conscious decision.
XSL transforms. XML is a wonderful, normalized mechanism for data transport, but man can it be a huge performance dog! Depending on how much data you're carrying around with you and which parser you choose and how complex your structure is, you could easily paint yourself into a very dark corner with XSLT. Yes, academically it's a brilliantly clean way of doing a presentation layer, but in the real world there can be catastrophic performance issues if you don't pay particular attention to this. I've seen a system consume over 30% of transaction time just in XSLT. Not pretty if you're trying to ramp up 4x the user base without buying additional boxes.
Can you buy your way out of a scalability jam? Absolutely. I've watched it happen more times than I'd like to admit. Moore's Law (as you already mentioned) is still valid today. Have some extra cash handy just in case.
Caching is a great tool to reduce the strain on the engine (increasing speed and throughput is a handy side-effect). It comes at a cost though in terms of memory footprint and complexity in invalidating the cache when it's stale. My decision would be to start completely clean and slowly add caching only where you decide it's useful to you. Too many times the complexities are underestimated and what started out as a way to fix performance problems turns out to cause functional problems. Also, back to the data usage comment. If you're creating gigabytes worth of objects every minute, it doesn't matter if you cache or not. You'll quickly max out your memory footprint and garbage collection will ruin your day. So I guess the takeaway is to make sure you understand exactly what's going on inside your virtual machine (object creation, destruction, GCs, etc.) so that you can make the best possible decisions.
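As a minimal sketch of "add caching only where it's useful" (illustrative names; a real cache would also need an eviction policy), a small wrapper with explicit invalidation:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

public class SimpleCache<K, V> {
    private final Map<K, V> entries = new ConcurrentHashMap<>();
    private final Function<K, V> loader;    // the expensive computation or query

    public SimpleCache(Function<K, V> loader) { this.loader = loader; }

    public V get(K key) {
        return entries.computeIfAbsent(key, loader);
    }

    // Call this when the underlying data changes, or the cache serves stale data.
    public void invalidate(K key) {
        entries.remove(key);
    }
}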
Sorry for the verbosity. Just got rolling and forgot to look up. Hope some of this touches on the spirit of your inquiry and isn't too rudimentary a conversation.
Well, there's this blog called High Scalability that contains a lot of information on this topic. Some useful stuff.
Often the most effective way to do this is with a well-thought-through design where scaling is part of it from the start.
Decide what scaling actually means for your project. Is it an infinite number of users? Is it being able to handle a slashdotting on a website? Is it development cycles?
Use this to focus your development efforts.
Jeff and Joel discuss scaling in the Stack Overflow Podcast #19.
FWIW, most systems will scale most effectively by ignoring this until it's a problem - Moore's law is still holding, and unless your traffic is growing faster than Moore's law does, it's usually cheaper to just buy a bigger box (at $2K or $3K a pop) than to pay developers.
That said, the most important place to focus is your data tier; that is the hardest part of your application to scale out, as it usually needs to be authoritative, and clustered commercial databases are very expensive- the open source variations are usually very tricky to get right.
If you think there is a high likelihood that your application will need to scale, it may be wise to look into systems like memcached or MapReduce relatively early in your development.
One good idea is to determine how much work each additional task creates. This can depend on how the algorithm is structured.
For example, imagine you have some virtual cars in a city. At any moment, you want each car to have a map showing where all the cars are.
One way to approach this would be:
for each car {
    determine my position;
    for each car {
        add my position to this car's map;
    }
}
This seems straightforward: look at the first car's position, add it to the map of every other car. Then look at the second car's position, add it to the map of every other car. Etc.
But there is a scalability problem. When there are 2 cars, this strategy takes 4 "add my position" steps; when there are 3 cars, it takes 9 steps. For each "position update," you have to cycle through the whole list of cars - and every car needs its position updated.
Ignoring how many other things must be done to each car (for example, it may take a fixed number of steps to calculate the position of an individual car), for N cars, it takes N² "visits to cars" to run this algorithm. This is no problem when you've got 5 cars and 25 steps. But as you add cars, you will see the system bog down. 100 cars will take 10,000 steps, and 101 cars will take 10,201 steps!
A better approach would be to undo the nesting of the for loops.
for each car {
    add my position to a list;
}
for each car {
    give me an updated copy of the master list;
}
With this strategy, the number of steps is a multiple of N, not of N². So 100 cars will take 100 times the work of 1 car - NOT 10,000 times the work.
This concept is sometimes expressed in "big O notation" - the number of steps needed is "big O of N" or "big O of N²".
Note that this concept is only concerned with scalability - not with optimizing the number of steps for each car. Here we don't care if it takes 5 steps or 50 steps per car - the main thing is that N cars take (X * N) steps, not (X * N²).
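As a runnable version of the same example (my own translation of the pseudocode above, just counting the "visits"), the nested strategy performs N * N updates while the flat one performs 2 * N:

import java.util.ArrayList;
import java.util.List;

public class CarMaps {
    public static void main(String[] args) {
        int n = 100;
        int[] positions = new int[n];              // stand-in for each car's position

        // O(N^2): every car writes its position into every car's map.
        int steps = 0;
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                steps++;                           // "add my position to this car's map"
        System.out.println("nested: " + steps);    // 10000

        // O(N): build one master list, then hand every car that list.
        List<Integer> master = new ArrayList<>();
        steps = 0;
        for (int i = 0; i < n; i++) { master.add(positions[i]); steps++; }
        for (int i = 0; i < n; i++) steps++;       // "give me the master list"
        System.out.println("flat: " + steps);      // 200
    }
}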

Resources