Clever objects, and the procedural-to-OOP mindset transition - Ruby

In POODR, author Sandi Metz gives an example of a Trip class that needs to be prepared.
The Trip class defines a method prepare, which from the outside appears to say the trip can prepare itself,
but in fact it does so by internally asking some other object to prepare it, like below:
class Trip
  def initialize(preparers)
    @preparers = preparers || []
  end

  def prepare
    @preparers.each do |preparer|
      preparer.prepare(self)
    end
  end
end
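For illustration, here is a sketch of what two preparers might look like (my own example, not copied from the book, though POODR uses names like Mechanic and TripCoordinator). Each one responds to the same prepare(trip) message in its own way:

class Mechanic
  def prepare(trip)
    # service the bicycles: check brakes, pump tires, oil chains...
    puts "Mechanic servicing bikes for trip #{trip.object_id}"
  end
end

class TripCoordinator
  def prepare(trip)
    # buy food, book lodging, arrange permits...
    puts "Coordinator stocking up for trip #{trip.object_id}"
  end
end

Trip.new([Mechanic.new, TripCoordinator.new]).prepare

The Trip never knows (or cares) which kind of preparer it is talking to; it only sends the message.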
Given we never know an object's internals, looking only from the outside this
seems a bit odd, as it makes the object appear to be clever. I mean, how can a trip
know how to prepare itself?
However, if we don't do this we go back to just having data structures
whose state is controlled by some outside function.
Thinking more about it, IRL I usually tell my baby brother "cut your hair", but in fact
what happens is he goes to the barber shop and has them cut his hair.
So the notion is not that weird and might apply to objects.
I can't put a name to this yet, but it makes me think I've been misunderstanding
messages as what an object can do, when in fact they're just what an object
can accept and respond to. We never know an object's internals, so who
cares how it goes about responding to messages?
Is this line of thinking correct?
Is this what is meant by the separation of intent and implementation?
Or does the analogy not fall in line at all?
Cheers

If I send you a letter saying "please, compute 2+2 for me", and I get a letter back which says "4", I have no idea how you came up with that answer: you may have memorized it, you may have looked it up, you may have computed it by hand, you may have used a calculator, you may have sent a letter to someone else asking them to compute 2+2, you may have sent several letters to several other people asking them to compute sub-problems of 2+2, you may have read the letter and decided based on its contents to pass it on to a specialist, you may not have read the letter and just blindly passed it on to someone else without even opening it, or your secretary may have opened it and answered on your behalf without you even knowing about it.
That's the fundamental nature of messaging: I cannot, should not, and must not know how you were able to answer. The only thing I can observe is the answer. As long as the answer is correct, it doesn't matter to me how the answer was obtained. Maybe it was printed out, mailed to China, computed by hand in an office, then mailed back and scanned. I don't know and I don't care.
Is this line of thinking correct?
Yes.
Is this what is meant by the separation of intent and implementation?
I am not familiar with that phrase. Is that something from the book? If yes, you should explain what it means in your question.
Or does the analogy not fall in line at all?
No, the messaging metaphor is a good one. It is the fundamental idea of OO. Classes are irrelevant, even objects are irrelevant. Messaging is what it's all about.

In my opinion, the problem is the lack of context / domain. If this Trip is used, for example, by a hotel / resort, then this is a mistake. If Trip exists in the minds of travelers, then this is good design. For travelers, a trip is, among other things, a list of the people traveling with them and the ability to notify those people to prepare for the trip.
I would solve this problem by more expressive naming of classes, modules / packages, etc.
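As a minimal sketch of that naming idea (the module names here are my own invention, not from the answer), a namespace can make the intended domain explicit:

module Travelers
  # A trip as the traveler conceives it: it knows its companions
  # and can ask each of them to prepare.
  class Trip
    def initialize(preparers)
      @preparers = preparers || []
    end

    def prepare
      @preparers.each { |preparer| preparer.prepare(self) }
    end
  end
end

module Resort
  # A trip as the resort's booking record: plain data, prepared by
  # staff objects elsewhere, so it has no prepare method of its own.
  Trip = Struct.new(:guest_name, :arrival_date, :room)
end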

Related

What is the process of discovering something that's been known called?

Usually, when I go back to my old code, I find that there is room for improvement, so I try to improve it. But most of the time it dawns on me that my past self already knew there was room for improvement, and that the change is simply illogical to make because there would be an edge-case bug. There is always reasoning that makes sense. So I waste my time rediscovering the edge-case bug that my past self already knew about. The edge-case bug simply eliminates the reasoning behind the improvement completely; it's like knowing the earth is round - you can't go back to thinking it's flat.
When you thought you knew better, so you tried to come up with something smart, but it turns out that someone else already knew it and you're just discovering the same thing later - what is this process of discovering called?
I want to use the word as a topic for the comment, e.g. "X: Don't try to change this code because of A, B, C, etc." I want to know what X is. It will also help me search for related topics later.
The exact word for discovering again something you'd since forgotten is "rediscover". So, if you want a noun, "rediscovery" would be appropriate. For your intended use, something more like "reminder" or possibly even "warning" or "todo" could work as well. Really, you want to prevent "rediscovery" by leaving this comment.

What is the maximum length a method or function should be?

A friend from work was arguing with me that it is OK for some methods to be 200 lines long; they can still be easy to understand.
And I was arguing that if you can split big methods, you should.
As a rule of thumb, no method should be, for example, bigger than 30 lines.
I've searched in a lot of places (on Google, of course) for a reasonable explanation in favor of each position, but couldn't find anything besides "the code will be more maintainable" or "divide and conquer" - no exact explanation of why.
Could anyone explain to me the concrete reasons why big methods are bad (or not)?
A method, function, or any piece of code is good if someone other than the author can understand it. If your methods start getting long, there is a good chance that some refactoring or an OOP design principle can be employed to separate them into smaller chunks that each have less total responsibility
(i.e. the Single Responsibility Principle).
Writing code is more art than science (sorry Comp. Sci. peeps... it's true). I don't know of any compiler that will care if your code is long-winded, but if you need to change something later on it will be a royal pain... trust me.
Regardless of what you do, there will always be another programmer with a theory or pattern to apply, but in the end you need to write code that works. No end user will ever compliment you on how elegant your code is, or on how wonderfully you used a design pattern... all they care about is: does it work?
So, here is my advice...
Write code with lots of comments
Spend time reading about OOP patterns
Download and inspect others code often
If you have a method of 200 lines and you give this code to someone else, that person might want to change it, but before he can do that he will have to read through 200 lines before he understands what to edit. That's just not efficient, because it's not easily readable.
In practice, a 200-line method can always be split up into multiple methods ("divide and conquer", as you call it). The advantage of this is that those methods can be re-used, and your code becomes more readable. If you later need to change that piece of logic, you only need to edit it in one place, instead of maintaining duplicated copies of it.
That said, I think a maximum of 30 lines for a method is quite appropriate. But as a rule of thumb you should always ask yourself whether parts of your method could be re-used, split up for readability purposes, or whether you could call other methods to make your code smaller and therefore more readable. By giving your methods parameters you make them more flexible, which is also a good thing, because then you can re-use them from other methods.
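To make the trade-off concrete, here is a small Ruby sketch (the order-report domain is my own invented example) of extracting short, named, reusable methods from one longer one:

# Before: one method doing everything (imagine this stretched to 200 lines).
def print_order_summary(order)
  total = order[:items].sum { |item| item[:price] * item[:qty] }
  total *= 0.9 if order[:items].size > 10   # bulk discount
  puts "Order for #{order[:customer]}"
  order[:items].each { |item| puts "  #{item[:name]} x#{item[:qty]}" }
  puts "Total: $#{format('%.2f', total)}"
end

# After: each extracted method is short, named for what it does,
# and reusable from other code.
def order_total(items)
  total = items.sum { |item| item[:price] * item[:qty] }
  items.size > 10 ? total * 0.9 : total   # bulk discount
end

def print_items(items)
  items.each { |item| puts "  #{item[:name]} x#{item[:qty]}" }
end

def print_order_summary(order)
  puts "Order for #{order[:customer]}"
  print_items(order[:items])
  puts "Total: $#{format('%.2f', order_total(order[:items]))}"
end

Each piece can now be read, tested, and reused on its own.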

What failure modes can TDD leave behind?

Please note I have not yet 'seen the light' on TDD nor truly got why it has all of the benefits evangelised by its main proponents. I'm not dismissing it - I just have my reservations which are probably born of ignorance. So by all means laugh at the questions below, so long as you can correct me :-)
Can using TDD leave yourself open to unintended side-effects of your implementation? The concept of "the least amount of code to satisfy a test" suggests thinking in the narrowest terms about a particular problem without necessarily contemplating the bigger picture.
I'm thinking of objects that hold or depend upon state (e.g. internal field values). If you have tests which instantiate an object in isolation, initialise that object and then call the method under test, how would you spot that a different method has left behind an invalid state that would adversely affect the behaviour of the first method? If I have understood matters correctly, then you shouldn't rely on order of test execution.
Other failures I can imagine cover the non-closure of streams, non-disposal of GDI+ objects and the like.
Is this even TDD's problem domain, or should integration and system testing catch such issues?
Thanks in anticipation....
Some of this is in the domain of TDD.
Dan North says there is no such thing as test-driven development; that what we're really doing is example-driven development, and the examples become regression tests only once the system under test has been implemented.
This means that as you are designing a piece of code, you consider example scenarios and set up tests for each of those cases. Those cases should include the possibility that data is not valid, without considering why the data might be invalid.
Something like closing a stream can and should absolutely be covered when practicing TDD.
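For instance (a sketch of my own using Minitest and an invented ReportWriter class, not something from the original answer), a test can drive the requirement that a stream gets closed:

require "minitest/autorun"
require "stringio"

class ReportWriter
  # Writes the text and guarantees the stream is closed afterwards.
  def write(io, text)
    io.write(text)
  ensure
    io.close
  end
end

class ReportWriterTest < Minitest::Test
  def test_closes_the_stream_when_done
    io = StringIO.new
    ReportWriter.new.write(io, "hello")
    assert io.closed?, "expected the stream to be closed"
  end
end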
We use constructs like functions not only to reduce duplication but to encapsulate functionality. We reduce side effects by maintaining that encapsulation. I'd argue that we consider the bigger picture from a design perspective, but when it comes to implementing a method, we should be able to narrow our focus to that scope -- that unit of functionality. When we start juggling externalities is when we are likely to introduce defects.
That's my take, anyway; others may see it differently.
TDD is not a replacement for being smart. The best programmers become even better with TDD. The worst programmers are still terrible.
The fact that you are asking these questions is a good sign: it means you're serious about doing programming well.
The concept of "the least amount of code to satisfy a test" suggests thinking in the narrowest terms about a particular problem without necessarily contemplating the bigger picture.
It's easy to take that attitude, just like "I don't need to test this; I'm sure it just works." Both are naive.
This is really about taking small steps, not about calling it quits early. You're still going after a great final result, but along the way you are careful to justify and verify each bit of code you write, with a test.
The immediate goal of TDD is pretty narrow: "how can I be sure that the code I'm writing does what I intend it to do?" If you have other questions you want to answer (like, "will this go over well in Ghana?" and "is my program fast enough?") then you'll need different approaches to answer them.
I'm thinking of objects that hold or depend upon state.
how would you spot that a different method has left behind an invalid state?
Dependencies and state are troublesome. They make for subtle bugs that appear at the worst times. They make refactoring and future enhancement harder. And they make unit testing infeasible.
Luckily, TDD is great at helping you produce code that isolates your logic from dependencies and state. That's the second "D" in "TDD".
The concept of "the least amount of code to satisfy a test" suggests thinking in the narrowest terms about a particular problem without necessarily contemplating the bigger picture.
It suggests that, but that isn't what it means. What it means is powerful blinders for the moment. The bigger picture is there, but interferes with the immediate task at hand - so focus entirely on that immediate task, and then worry about what comes next. The big picture is present, is accounted for in TDD, but we suspend attention to it during the Red phase. So long as there is a failing test, our job is to get that test to pass. Once it, and all the other tests, are passing, then it's time to think about the big picture, to look at shortcomings, to anticipate new failure modes, new inputs - and write a test to express them. That puts us back into Red, and re-narrows our focus. Get the new test to pass, then set aside the blinders for the next step forward.
Yes, TDD gives us blinders. But it doesn't blind us.
Good questions.
Here's my two cents, based on my personal experience:
Can using TDD leave yourself open to unintended side-effects of your implementation?
Yes, it does. TDD is not a complete solution on its own. It should be used along with other techniques, and you should definitely bear in mind the big picture (whether you are responsible for it or not).
I'm thinking of objects that hold or depend upon state (e.g. internal field values). If you have tests which instantiate an object in isolation, initialise that object and then call the method under test, how would you spot that a different method has left behind an invalid state that would adversely affect the behaviour of the first method? If I have understood matters correctly, then you shouldn't rely on order of test execution.
Every test method should execute with no regard for what was executed before or what will be executed after. If that's not the case, then something's wrong (from a TDD perspective on things).
Talking about your example: when you write a test, you should know in reasonable detail what your inputs will be and what the expected outputs are. You start from a defined input, in a defined state, and you check for a desired output. You're not 100% guaranteed that the same method in another state will do its job without errors, but the "unexpected" should be reduced to a minimum.
If you designed the class, you should definitely know whether two methods can change some shared internal state and how; and more importantly, whether this should really happen at all, or whether there is a problem of low cohesion. A sketch of order-independent tests follows below.
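As a sketch of what order-independent tests look like (Minitest and the Counter class are my own illustration, not from the original answer):

require "minitest/autorun"

class Counter
  attr_reader :value

  def initialize(value = 0)
    @value = value
  end

  def increment
    @value += 1
  end
end

class CounterTest < Minitest::Test
  # setup runs before every test, so each test starts from a fresh,
  # fully defined state and never depends on execution order.
  def setup
    @counter = Counter.new
  end

  def test_starts_at_zero
    assert_equal 0, @counter.value
  end

  def test_increment_adds_one
    @counter.increment
    assert_equal 1, @counter.value
  end
end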
Anyway, good design at the "TDD" level doesn't necessarily mean that your software is well built; you need more, as Uncle Bob explains well here:
http://blog.objectmentor.com/articles/2007/10/17/tdd-with-acceptance-tests-and-unit-tests
Martin Fowler wrote an interesting article about mocks vs. stubs in testing, which covers some of the topics you are talking about:
http://martinfowler.com/articles/mocksArentStubs.html#ClassicalAndMockistTesting

What does "prototyping" mean in practice?

When I recently asked about the uses of Ruby, someone told me it was good for prototyping. I basically know what that means: quickly get the very base of your app up and working, see if there are conceptual problems, and then add the rest.
Am I right with how I understand prototyping?
What would be a concrete example of prototyping a Snake game in Ruby or any other language?
Yep, prototyping serves as a proof of concept, to ensure that what you want to build is feasible. Things that might be left out of a prototype include exception handling, logging, etc.
A common mistake is teams switching from prototype to real code on the fly, i.e. just continuing on with the so-called "prototype", except it now becomes the real code.
For many clients, describing what an application will do, or enumerating a set of requirements, is not enough for them to fully grasp how it will work. This leads to the infamous mid-project changes and scope creep. One way to mitigate this is to create a throwaway version that lets them see a "working" example of how the real application will operate.
It can often function as a proof-of-concept as well, but I think client communication is the more useful purpose of a prototype. In particular, you may want to do a prototype using a different technology -- Ruby/Rails, say, or pure Javascript -- than the final working application will use. If so, there's still proof-of-concept value in terms of the algorithms you're using, or the ways you may have to connect to other systems, but again, the actual code will all be thrown away.
So the part of your description I would disagree with is "add the rest" -- I'd throw the prototype out and start over.
Yes, that's a good basic description of prototyping. It's just getting the foundations working so that you know it can be done, and that it suits your needs.
An example of a Snake game prototype would be having a snake that you can move up, down, left, and right, that eats something and grows when it does, and maybe one block in the middle of the board to maneuver around. So you wouldn't have a splash screen, or keep track of high scores, or have different levels. Just the basics of the game - something like the sketch below.
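A throwaway sketch of that prototype in Ruby (the rules and names are my own assumptions): just movement, eating, and growth - no splash screen, scores, or levels.

class Snake
  DIRECTIONS = { up: [0, -1], down: [0, 1], left: [-1, 0], right: [1, 0] }

  attr_reader :segments

  def initialize(start = [5, 5])
    @segments = [start]     # head first
    @direction = :right
  end

  def turn(direction)
    @direction = direction if DIRECTIONS.key?(direction)
  end

  def move(food)
    dx, dy = DIRECTIONS[@direction]
    head = [@segments.first[0] + dx, @segments.first[1] + dy]
    @segments.unshift(head)
    @segments.pop unless head == food   # eating keeps the tail: the snake grows
  end
end

snake = Snake.new
snake.turn(:down)
snake.move([5, 6])    # food is here, so the snake grows
snake.move([9, 9])    # no food, so it just moves
puts snake.segments.inspect   # => [[5, 7], [5, 6]]

Enough to show the core mechanic works; everything else would wait for the real implementation.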
Is "Good for prototyping" a polite way of saying "It's difficult to maintain"?

How to react when the client's response is negative on delivery? [closed]

Closed. This question is opinion-based. It is not currently accepting answers. Closed 8 years ago.
I am a junior programmer. My supervisor told me to sit in with the client, so I joined the meeting. I saw the unsatisfied face of the client despite the successful (from my programmer's perspective) delivery of the project!
Client: You could have included this!
Us: Was not in the specification!
Client: Common Sense!
As a programmer, how do you respond in this situation?
What you should do to avoid this situation:
Explicitly spec out what will be included and what will not be included.
The problem probably comes down to the unspecified parts of the spec:
The client thinks that unspecified stuff should be in, i.e. it was implied.
The developer thinks that unspecified stuff should not be in.
For future specs, you should have a catch-all statement that explicitly states that anything not specified in the document can be done after the original specification is delivered, at an additional cost.
What you should do in the current situation:
Other than learning from your experiences, you should come to some compromise with the client.
Example: I will do this feature that you feel is common sense, but for all future additions/changes it will have to be spec'ed out explicitly.
I.e. you will have to do a little more work, but it is worth it in return for the explicit catch-all agreement your client will enter into.
Bad spec?
Was it necessarily a bad spec? No.
It is impossible to mention everything your clients may expect, so it is critical to have the catch-all statement mentioned above stated clearly and explicitly in your spec/contract.
Other ways to reduce the problem:
Involve the client early, show them early prototypes. Even if they don't demand it.
Try not to sell the client an end product, but more of a service for working on his product.
Consider an agile development model or something similar so that tasks are well defined, small, paid for, and indisputable.
This would be one of many reasons why I switched to an Agile development philosophy. The only way, in my opinion, to successfully avoid this scenario is to either be omniscient or involve the customer heavily and release early/release often to get feedback as soon as possible. That way you can develop the software the customer really wants, not the software the customer tells you they want.
Client: You could have included this!
Us: Was not in the specification!
Client: Common Sense!!
Us: We do not attempt to go beyond what the client has specified - we follow the specification. It's as important to NOT implement features not specified as it is to implement features specified. We will never second guess our customers, who value the fact that they can completely depend on us to correctly and completely implement the specification on time and under budget.
As others very rightly point out, the situation is almost always more complex than the simple exchange I've described above.
However, the above is valid if the implementer has a specification with the customer's signature on it which essentially implements an agreement that says "once the software provably implements all the features in the spec then it is considered complete", and anything additional is outside the specification and therefore outside the contract.
The contract itself may have some input here as well - if you don't have a signed contract than it doesn't matter what's in the spec - everything so far has been done on a handshake, and the entire deal (including payment) can go down the toilet based on any dissatisfaction on either side.
But if you have a contract and a specification, and the customer has seen and signed both, then they have no wriggle room to ask you to go further.
Now, as to the question of whether you should implement it:
AWESOME! You delivered a product and they only had one complaint. Implement the feature, call it a 'freebie' (make sure they understand you're working outside the spec and contract, and explicitly send them a bill for the work with the discount shown in dollars), and have them sign off on the project as a whole.
It will explicitly demonstrate that the project is ended, that you went above and beyond the call of duty, and that any further 'surprises' are outside the contract/spec, which gives you a nice layer of protection beyond what you already (ostensibly) have.
If it's a UI issue, then you're in murkier water.
Does the spec adequately describe the UI? Does it have mockups? I wouldn't fault a customer for this complaint about the UI if the spec did not very closely describe the layout, usage, and include mockups.
Either way, I think you can understand the customer's position: if they haven't played with UI mockups, they're going to be disappointed with the result regardless - there's no way, psychologically speaking, that you and your customer could possibly have had the same idea in mind (never mind the fact that common sense isn't!).
Quite frankly if this is the first time the customer has thought about checking out the UI before the work is finished, then it's at least partially your fault for not explaining good UI design processes to them. This is a key feature for their app, and it's very tightly coupled to what they've imagined - no one can be satisfied in such a situation unless they've 'grown' their internal representation over time to match what the reality is.
This disconnect is solved only through frequent user and customer testing, which is obviously missing. This is a problem regarding client education and communication, not whether the specification was met or not.
-Adam
Expect last minute changes of scope - they always happen, so be ready.
Review progress frequently with client - to minimize surprises.
Contract: Functional Spec, plus Time & Materials with an initial cap (so the client feels in control).
Then when changes come along, re-negotiate the cap if necessary.
Never say they can't have what they want. They can get that answer for free!
Always give them a little more than they asked for, so they know you've got a positive attitude.
Relate to the client as being on the same team with them. Don't accept being legalistically painted as an adversary.
They may think of contractors as not loyal, compared to employees. Show them you're as dedicated to their success as their employees are, and you'll go the extra mile.
Classic case...
There's no definite answer to this one, but it all comes down to communication. There should have been preventive measures in place (like weekly reviews or something similar).
For sure, you can't redo the whole thing for free.
Two ways: either tell them to ** off, or deal with it.
If you choose to deal:
First, empathize, respect the client.
Have a look at what can easily be changed.
Have a look at the contracts.
Maybe create a new agreement.
Don't do too much.
Make them see the progress and the work it takes.
Find workarounds for the missing features (maybe using other great features, or available tools.)
Use your common sense; it is so common, it's not even funny.
This is one of the many drawbacks of a fixed bid arrangement. Any time business needs or priorities change, or there is even a simple misunderstanding, it results in anything from an awkward situation like this to calling lawyers in. If you have an arrangement where you get paid for development time, you can always react to any change and get paid for whatever time it takes to make that change. Also, having a by-the-hour arrangement does not preclude having a plan or making an estimate.
Once you are in a fixed bid pickle, though, your options are:
1) Do it at an additional cost.
2) Do it free.
3) Don't do it.
Option 3 is the worst, and Option 1 is the best. If you have a good trusting relationship and decent communication with the client, it's usually easy to arrive at Option 1. If the relationship is bad, then you've got bigger problems. At that point, just try to avoid lawyers.
A final point - any project that has something known as "The Delivery Date" inevitably runs into the problem described. Projects with said date usually involve retreating to a cave for several months to develop in hiding followed by an unleashing of the product all at once in front of the stakeholders. This is abrupt and leaves plenty of time for client expectations and the actual product to drift apart. If, instead, you show intermediate versions of the product and gather feedback every few weeks, two things happen. First, you get better feedback, minimize misunderstandings, and make a better product. Second, there is no single point in time on which a massive amount of expectation is laid. The potential difference between what the client is imagining and what actually exists is much smaller. No surprises.
Good luck.
"how do you react?"
Question 1 - do you want to continue this relationship with this customer? Seriously. If they are going to claim that unspecified features are "common sense," this may not be a good relationship to maintain or enhance.
If you want to disengage, then that's easy. Ask for them to highlight each part of the specification that you failed to comply with and play that game. Get specific test criteria for each missing feature. Pull Teeth. Be confrontational in determining what's missing. Don't ask why. Just ask for all the details up front. It's slow and unpleasant. But you don't want them anyway.
If you want to engage, well, you're going to have to change the relationship. Currently, you have a Passive Aggressive Customer. They won't say what they want, but they will say what they don't want.
This may be a habit with them; this may be how they win concessions. Or this just may be sloppy specification on their part.
If you want the relationship, your reaction has two parts.
Short-term. Get something they're happy with. They have to identify specific changes. You have to score each change with a "cost to do" and "fit with specification".
Some things are cheap and a good fit. Do those.
Some things are cheap to do, but a bad fit with the specification. Think twice about enabling a bad specification to lead to rework. In a sense, you purchased the specification from them; you may need to raise your standards, also.
The expensive things which (sadly) fit the specification are a problem. You're in trouble with these, and pretty much have to do them.
The expensive things which don't fit well with the specification are lessons learned for everyone. Detail a plan for these, including specification rewrites and approvals.
Long-term. Make sure you're not PA'd (passive-aggressed) again. Review early and often, use Agile techniques. Communicate more, prototype more, release more.
Well, it was not successfully delivered. Somewhere along the line there was a miscommunication. Without knowing the specifics, I would suggest this is not a developer-injected problem, and it is probably not to be blamed on the customer - the requirements-gathering task was insufficient. This is a classic example of what happens when the software side does not have domain experts, or the requirements discovery process doesn't do all that it could...
If it were me, I would correct the problem and figure out how to avoid similar issues in the future.
How you handle this can very well determine the future of this contract/business with the client. Taking responsibility and correcting the issue is a huge opportunity for your company.
EDIT:
This is a good time to evaluate how this happened to help correct it. Some companies choose to totally revamp everything they do which is a mistake I think. So is ignoring it. Blaming people for the problem is also a mistake.
It is a good time to walk through how this happened, what the process is, and maybe how it could have been caught. I would not make huge rule changes or process changes - but coming up with guidelines for future work is a great thing. Your company had a clear lesson about a shortcoming. Losing the opportunity to correct this problem and to correct your process would be a waste of a good chance.
ZiG, I've had to deal with this problem on several occasions at my current place of employment. My group (3 developers) tries to approach things in an Agile manner. We're used to getting mid-stream and even last-second requests (which we then treat on a case-by-case basis).
However, we make it clear that resources (particularly time) are limited and if it's not in the spec we can't make promises. If it's judged important and it can't fit into the current release, we generally plan a followup release. If it isn't important, it goes on a list.
One thing I've found is that you can get users to agree to Spec S at Time T. However at Time T + N, getting them to remember they agreed to Spec S, or getting them to acknowledge that they did so (with the documentation you've been keeping, I hope!) can be trickier than it should be.
Speaking to the OP's subject and question:
If you are an employed programmer, then I would hope that other resources are in the meeting with you. Possibly "higher ups" in the organization.
If this is the case, then your job is to answer DIRECT questions, and to keep your emotions in check. Yes, you may feel injured because they don't love your code, but showing any emotion with bosses present is not a good thing. Rather, try and look neutral and let the others handle the session.
Now, if they "hang you out to dry", then I would recommend the following questions:
a) "OK. I see. Why exactly to you feel this is common sense to include this feature? I'd like to discover why we didn't include it." (force them to explain their thought process. Common sense to one person is rarely common sense to anyone else.)
b) "Well, I'm sure we could include that in the next release. I'll leave it up to XXX (the bosses) to come to a mutually agreeable approach" (i.e. don't talk cost or freebies with bosses present. EVER.)
Again, this assumes you are a programmer WORKING for a company that delivered the product. Now, if you are more than that - i.e. you ARE one of the higher-ups - then many of the suggestions here are excellent.
However, if you are the higher-up or are a consultant programmer, then first and foremost:
a) Apologize for the process that did not catch this requirement. Promise to work with the client to prevent this from recurring.
Then on to the other strategies. It really doesn't matter if you charge for the fix or not - the apology is the most important action to the client. Again, it bears repeating - you are not apologizing for the missed feature. You are apologizing for the faulty design process that let it slip. Clients are usually pretty accommodating when you start this way and then seek a solution.
Cheers,
-Richard
Use Scrum-like approaches to avoid this deathtrap: involve the client in the dev process early, frequently, and in informal, restricted committees -> risk reduction and improved agility.
In terms of your literal question, how to react, the best way is to ignore your ego ("what?! After I worked so hard on this and met the spec?!") and instead focus on some active listening and working to consensus.
Client: You could have included this!
Us: Was not in the specification!
Client: Common Sense!!
Us: I understand that you're not happy that we didn't go beyond the bounds of the specification. Seeing how you feel about this, how can we make you happy? Let's see if there's a process we can create together that will help everyone.
Essentially, you don't want to turn this into a "you said / I said" death match. The only way to resolve those involves lawyers, and then nobody wins. If you can agree that the spec or the process was at fault, work together to fix those.
This approach actually just worked for me: wait for the guy who doesn't like your software to leave and be replaced by the guy who does like it.
Obviously you can't really rely on this, but if you're sure that you did a good job and that your software really will satisfy the business needs of the people who hired you, it does pay to wait it out. Sometimes the client's initial reaction will not be their final one, especially if you can quickly incorporate their concerns.
Don't try make the client feel like it is their fault. It might be their fault, but making them feel that way will not produce constructive results, and could just annoy them.
Instead, you should realize that clients only complain about software they use, in most cases because they like it. Nobody complains about software nobody uses. It is inevitable that a client will complain about the software you deliver, even if you deliver exactly what they ask for. So don't sweat it. Software is never done.
Total failure on the part of the person in charge of requirements collection, no doubt about it. Additional failure of the project management to not iterate the deliverable and have check-in meetings with the client.
However, you have a signed-off spec, and what you've delivered matches the spec. So, your company has two choices: write off the cost in the name of business development and make the change for free, or charge them for the change request.
If it ain't in the spec, it ain't in the spec. As a developer with no specific domain knowledge, 'common sense' is an irrelevant concept. Different industries work in different ways and one approach might be quite appropriate for a particular domain but completely unacceptable in the other.
Writing good specs is an art form. IMO, you can either take an agile 'analyst/programmer' approach where you make small iterations, or write and maintain a detailed, unambiguous specification. Both are highly skilled tasks, and both are still iterative. You still have to evolve the specification.
Either way is not as easy as it sounds and both require the ability to establish a good working relationship with the client.
You cannot know what your customer is thinking. This situation often occurs with clients who have no experience with programming projects. What I suggest is simply to show them that "common sense" isn't a very accurate answer in engineering (or programming, if you prefer).
Show them other examples from life that demonstrate you cannot build something that isn't written down. For example: when building a new house, the builder needs a plan with every detail... he won't add extra electrical outlets in the living room just because it's more "common sense" to have some spares...
I had this happen once. And luckily it wasn't me who created the design, because the design proved to be the problem.
It is of vital importance that the communication between your company and the client is as good as possible. Make sure you understand each other. Ask questions and let them ask questions. Do not leave anything open in the design; open points will be the problem at delivery. And have regular meetings during the project (preferably with a prerelease).
Unfortunately, a lot of developers are bad at communication, and a lot of clients are not aware of their own needs. But if you can minimize the gap, you will have found yourself a happy (and returning) customer.
This is why I (and the teams I've worked with) have always used a prototype-style approach, meaning:
after collecting the requirements, you show the client an early and basic release of the software
the client says "you could have included this" / "it's common sense"
you change your design to reflect the client's desiderata
iterate from point 1 until the official release
You have to start early on: tell the customer, early and often, that the spec/use-cases/user-stories are a contract which defines what will be delivered. In an agile environment there are plenty of chances for the customer to notice some "common sense" feature they want and ask for it, which is one of the advantages of an agile approach; but if you start accepting "common sense" additions at the end, you are setting yourself up for infinite extensions, probably at your own expense.
Some customers expect this; the more and better you tell them they can't, the easier the eventual arguments will be.
As a junior guy, I realize you can't do this -- yet -- but one of the hard-but-necessary lessons is that sometimes you have to fire a customer.
You learn - everything is learning and nothing is personal.
We are experts in our area; we know better than the customer what they need. And next time, for the next customer, we will suggest all the useful features in advance and make them happy - and make them pay more money, because we are the experts and we know better.
