Tools to track my coding progress in Ruby [closed]

In a nutshell, I'm looking for tools for tracking my progress in "fleshing-out" a complex system in Ruby.
Usually when I start working on a new system in Ruby, I first write an outline.rb file that contains stub class definitions for all the classes I think I'll want to use. Then I gradually implement the functionality.
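For illustration, such an outline file might look something like this (the class and method names here are invented, not from my actual system):

# outline.rb -- stub definitions to be fleshed out later
class Catalog
  def add_course(course)
    raise NotImplementedError
  end

  def courses_in(city)
    raise NotImplementedError
  end
end

class Course
  def initialize(name, city, category)
    raise NotImplementedError
  end
end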
Are there any tools out there for quickly surveying my stubs and keeping track of which ones still need to be implemented, and how long each implementation took me, in hours?

I usually track my progress through my tests. For example, if you're doing TDD/BDD, you could use RSpec and create tests that are marked as "pending" (basically, tests without a body).
Take this gist, for example (https://gist.github.com/4150506):
describe "My API" do
  it "should return a list of cities (e.g. New York, Berlin)"
  it "should return a list of course categories"
  it "should return a list of courses based on a given city"
  it "should return a list of courses based on a category and city"
end
In it, I list a few tests that I expect the system to pass once all the implementation details are in place. This allows me to get an overall view of what I'm building without getting too deep too quickly.
Update: The idea is to be able to run the specs at the command line and rspec will tell you which tests are passing, failing or pending.
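As a sketch, RSpec gives you a few ways to mark unfinished work (bodiless examples, xit and pending are standard RSpec; the example bodies and the api object are invented):

describe "My API" do
  # A bodyless example is reported as pending.
  it "should return a list of cities"

  # xit disables an example and reports it as pending.
  xit "should return a list of course categories" do
    expect(api.categories).to include("Cooking")
  end

  # pending inside the body marks the example as pending; in recent
  # RSpec versions the body still runs and is expected to fail.
  it "should return courses for a given city" do
    pending "city filtering not implemented yet"
    expect(api.courses(city: "Berlin")).not_to be_empty
  end
end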
As for the time-tracking part, I just use a timer app (tickspot.com, for example). You can also check the timestamps on your spec files to get a sense of when you started modifying them and when you stopped.
Hope that helps.

My answer is essentially "no".
How do you define "done"? Not "just a stub", or exhibiting complete behavior? How do you define "complete" behavior? And what about methods you didn't stub originally, of which I can only imagine there would be dozens, if not hundreds?
Time against stubbed methods doesn't strike me as a meaningful statistic; time against functionality does. That should be handled by issue-tracking tickets and commit logs, but those reflect overall elapsed time, not specifically time on task, which is often significantly different.
I don't see how this can be done with any real accuracy over a project of any significant size without very granular issue tracking, time entry, and unit and behavioral tests. Even then, you'd likely need to build out some tools to help with your particular methodology.

Related

How to document undefined behaviour in the Scrum/agile/TDD process [closed]

We're using a semi-agile process at the moment where we still write a design/specification document and update it during the development process.
When we're refining our requirements, we often hit an edge case where we decide it's not important to handle, so we don't write any code for that use case and we don't test it. In the design spec we explicitly state that this scenario is out of scope because the system isn't designed to be used in that way.
In a more fully-fledged agile process, the tests are supposed to act as a specification for the expected behaviour of the system, but how would you record the fact that a certain scenario is explicitly out-of-scope rather than just getting accidentally missed out?
As a bit of clarification, here's the situation I'm trying to avoid: We have discussed a scenario and decided we won't handle it because it doesn't make sense. Then later on, when someone is trying to write the user guide, or give a training session, or a customer calls the help desk, exactly the same scenario comes up, so they ask me how the system handles it, and I think "I remember talking about this a year ago, but there are no tests for it. Maybe it got missed off the plan, or maybe we decided it wasn't a sensible use case, or maybe there's a subtle reason why you can't actually ever get into that situation", so I have to try and search old Skype chats or emails to find out the answer. What I want to achieve is to make sure we have a record of why we decided not to support that scenario so that I can refer back to it in the future. At the moment I put this in the spec where everyone can see it.
I would document deliberately unsupported use cases/stories/requirements/features in your test files, which are much more likely to be regularly consulted, updated, etc. than specifications would be. I'd document each unsupported feature in the highest-level test file in which it was appropriate to discuss that feature. If it was an entire use case, I'd document it in an acceptance test (e.g. a Cucumber feature file or RSpec feature spec); if it was a detail I might document it in a unit test.
By "document" I mean that I'd write a test if I could. If not, I'd just comment. Which one would depend on the feature:
For features that a user might expect to be there, but for which there is no way for the user to access (e.g. a link or menu item that simply isn't present), I'd write a comment in the appropriate acceptance test file, next to the tests of the related features that do exist.
Side note: Some testing tools (e.g. Cucumber and RSpec) also allow you to have scenarios or examples in feature or spec files which aren't actually run, so you can use them like comments. I'd only do that if those disabled scenarios/examples didn't result in messages when you ran the tests that might make someone think that something was broken or unfinished. For example, RSpec's pending/skip loudly announces that there is work left to be done, so it would probably be annoying to use it for cases that were never meant to be implemented.
For situations that you decided not to handle, but which an inquisitive user might get themselves into anyway (e.g. entering an invalid value into a field or editing a URL to access a page for which they don't have permission), don't just ignore them, handle them in a clean if minimal way: quietly clear the invalid value, redirect the user to the home page, etc. Document this behavior in tests, perhaps with a comment explaining why you aren't doing anything even more helpful. It's not a lot of extra work, and it's a lot better than showing the user an error page or other alarming behavior.
For situations like the previous ones, but that you for some reason decided not to handle at all, or couldn't find a way to handle, you can still write a test that documents the situation; for example, that entering some invalid value into a form results in an HTTP 500.
If you would like to write a test, but for some reason you just can't, there are always comments -- again, in the appropriate test file near tests of related things that are implemented.
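For instance, a deliberately unsupported case like the ones above might be recorded in an RSpec feature spec roughly like this (the coupon scenario, paths and field names are all invented for illustration):

# spec/features/coupon_codes_spec.rb
require "rails_helper"

RSpec.describe "Coupon codes", type: :feature do
  # Deliberately unsupported: stacking several coupons on one order.
  # Decided out of scope (see the design spec) -- too rare to justify
  # the pricing complexity. We quietly keep the last code entered
  # rather than showing an error.
  it "quietly keeps only the most recently entered coupon code" do
    visit checkout_path
    fill_in "Coupon code", with: "SAVE10"
    fill_in "Coupon code", with: "SAVE20"
    click_button "Apply"
    expect(page).to have_content("SAVE20")
    expect(page).not_to have_content("SAVE10")
  end
end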
You should never test undefined behavior, by ...definition. The moment you test a behavior, you are defining it.
In practice, either it's valuable to handle an edge case or it isn't. If it is, there should be a user story for it, which acts as documentation for that edge case. What you don't want is an old user story documenting a future behavior, so it's probably not advisable to document undefined behavior in stories that don't handle it.
More generally, agile development always works iteratively. Edge-case discovery is part of iterative work: with work comes increased knowledge, and with increased knowledge comes more work. It is important to capture these discoveries in new stories instead of trying to handle everything in one go.
For example, suppose we're developing Stack Overflow and we're doing this story:
As a user I want to search questions so that I can find them
The team develops a simple question search and discovers that we need to handle closed questions... we hadn't thought of that! So we simply don't handle them (whatever the simplest-to-implement behavior is). Notice that the story doesn't document anything about closed questions in the results. We then add a new story:
As a user I want to specifically search closed questions so that I can find more results
We develop this story, and find more edge cases, which are then more stories, etc.
In the design spec we explicitly state that this scenario is out of scope because the system isn't designed to be used in that way
Having undocumented functionality in your product really is a bad practice.
If your development team followed BDD/TDD techniques, they should (note emphasis) reduce the likelihood of this happening. If you found this edge case, what makes you think your customer won't? Having an untested and unexpected feature in your product could compromise its stability.
I'd suggest that if an undocumented feature is found:
Find out how it was introduced (common reason: a developer thought it might be a good feature to have as it might be useful in the future and they didn't want to throw away work they produced!)
Discuss the feature with your business analysts and product owner. Find out if they want such a feature in your product. If they do, great: document and test it. If they don't, remove it, as it could be a liability.
You also had a question regarding the tracking of the outcome of these edge-case scenarios:
What I want to achieve is to make sure we have a record of why we decided not to support that scenario so that I can refer back to it in the future.
As you are writing a design/specification document, one approach you could take is to version that document. Then, when a feature/scenario is taken out, you can note in a version-change section of the document why the change was made, and refer to that change history at a later date.
However, I'd recommend using a planning board to keep track of your user stories. With such a board you could write a note on the (virtual or physical) card explaining why the feature was dropped, which can likewise be referred to later.

Why and when to use an Interface with Linq [closed]

Assuming you have three layers (business, data and UI), my data layer would have a LINQ to SQL file with all the tables added.
I've seen some examples where an interface is created in the business layer and then implemented in another class (the type being IQueryable/IEnumerable), yet other classes use normal LINQ syntax to get/save/delete/update data.
Why and when would I use an interface which has an IQueryable/IEnumerable type?
Two of the most common situations in which you may want to do this are:
you want to protect yourself from changes to that part of your system.
you want to be able to write good unit tests.
For example, say you have a business layer that talks directly to LINQ to SQL. In the future you may have a requirement to use NHibernate or Entity Framework instead. Making this change would impact your business layer, which is probably not good.
Instead, if you have programmed to an interface (say, IDataRepository), you should be able to swap concrete implementations like LINQtoSQLRepository or HibernateRepository in and out without having to change your business layer; it only cares that it can call, say, Add(), Update(), Get() and Delete(), not how these operations are actually carried out.
Programming to interfaces is also very useful for unit testing. You don't want to be running tests against a database server, for a variety of reasons such as speed and reliability. Instead, you can pass in a test double (a fake or mock implementation): for example, some in-memory test data that implements your IDataRepository allows you to test Add(), Delete(), etc. from your business layer without having a DB connection.
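To make that concrete, here is a minimal C# sketch (IDataRepository is the name suggested above; the generic shape, the method set and the LINQ to SQL details are illustrative assumptions, not a definitive design):

using System;
using System.Collections.Generic;
using System.Linq;

// The business layer depends only on this interface.
public interface IDataRepository<T>
{
    IEnumerable<T> Get(Func<T, bool> predicate);
    void Add(T entity);
    void Delete(T entity);
}

// Concrete implementation backed by LINQ to SQL.
public class LinqToSqlRepository<T> : IDataRepository<T> where T : class
{
    private readonly System.Data.Linq.DataContext _context;

    public LinqToSqlRepository(System.Data.Linq.DataContext context)
    {
        _context = context;
    }

    // For brevity this filters in memory; a real repository would take
    // an Expression<Func<T, bool>> so the database does the filtering.
    public IEnumerable<T> Get(Func<T, bool> predicate)
    {
        return _context.GetTable<T>().Where(predicate);
    }

    public void Add(T entity)
    {
        _context.GetTable<T>().InsertOnSubmit(entity);
        _context.SubmitChanges();
    }

    public void Delete(T entity)
    {
        _context.GetTable<T>().DeleteOnSubmit(entity);
        _context.SubmitChanges();
    }
}

// In-memory test double: unit tests of the business layer
// need no database connection at all.
public class InMemoryRepository<T> : IDataRepository<T>
{
    private readonly List<T> _items = new List<T>();

    public IEnumerable<T> Get(Func<T, bool> predicate)
    {
        return _items.Where(predicate);
    }

    public void Add(T entity) { _items.Add(entity); }
    public void Delete(T entity) { _items.Remove(entity); }
}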
These points are generally good practice in all aspects of your application. I suggest reading up on the Repository pattern, the SOLID principles, and maybe even Test Driven Development. This is a large and sometimes complex area, and it's difficult to give a detailed answer about exactly what to do and when, as it needs to suit your scenario.
I hope this helps you get started.

Best practice of data validation in enterprise application [closed]

I'm studying an e-commerce-like web application. In one case study, I'm having trouble with mass data validation. What is the best practice for that in an enterprise application?
Here is one scenario:
For a cargo system, there is a "Cargo" object, which contains a list of "Good" objects to be shipped. Each "Good" has a string field named "Category" specifying what kind of "Good" it is, such as "inflammable" or "fragile".
So there are two places validation could take place: on creation of the object, or on storage of the object in the database. If we only validate at the storage stage, then when validation fails for some "Good", storage of the "Cargo" fails too, and the previously stored "Goods" need to be deleted, which is inefficient. If we also validate at the creation stage, there will be duplicated validation logic (a foreign-key check, since I store those "Category" values in the database, plus a check in the constructor).
If you are saving multiple records to the database, all the updates should be done at once in a single transaction, so you would validate ALL the objects before saving. If there is an issue during the save, you can then roll back the transaction, which rolls back all the database updates (i.e. you don't have to go back and manually delete records).
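For example, with something like ActiveRecord (an assumption; the question is framework-agnostic, and the category list and store_cargo helper are invented), the Cargo/Good scenario above might look like:

class Cargo < ActiveRecord::Base
  has_many :goods
end

class Good < ActiveRecord::Base
  belongs_to :cargo
  validates :category, inclusion: { in: %w[inflammable fragile] }
end

def store_cargo(cargo)
  # Validate everything up front so we never begin a doomed save.
  return false unless cargo.valid? && cargo.goods.all?(&:valid?)

  # One transaction: if any insert still fails (e.g. a race on a
  # foreign key), every earlier insert is rolled back automatically,
  # so there is nothing to clean up by hand.
  Cargo.transaction do
    cargo.save!
    cargo.goods.each(&:save!)
  end
  true
rescue ActiveRecord::RecordInvalid
  false
end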
Ideally you should validate on the server before saving data; the server-side validation should then propagate the validation messages back up to the user/UI. Validation on the client/UI is also good in that it's more responsive and reduces the overhead on the rest of the system.

What's the ugliest code you were forced to write by outside limitations? [closed]

What is the ugliest code that you wrote - not because you didn't know better, but because of limitations of the software, the hardware or the company policy?
Because of unusual choices in database layouts and programming languages, I once built a C program that read in a SQL database structure and generated another C program that'd read that database and back it up into a file, or copy it into a second database that shared more or less the same columns. It was a monster clunky code generator.
Any regular expression. :)
In the late 90s I had to write several web sites in Informix Universal Server web blade (aka Illustra web blade).
For anyone who doesn't know anything about this execrable environment, it forced you to use the most bizarre language I have ever come across. As Joel Spolsky described it:
When it did run, it proved to have the only programming language I've ever seen that wasn't Turing-equivalent, if you can imagine that.
More on it here: http://philip.greenspun.com/wtr/illustra-tips.html
And an example of a 'simple' if condition:
cond=$(OR,$(NXST,$email),$(NXST,$name),$(NXST,$subject))
One example of its dire nature was the fact that it had no loops. Of any kind. It was possible to hack looping functionality by creating a query and iterating through its rows, but that is so wrong it makes me feel sick.
Edit: I've managed to find a complete code sample. Behold:
<HTML>
<HEAD><TITLE>WINSTART bug</TITLE></HEAD>
<BODY>
<!--- Initialization --->
<?MIVAR NAME=WINSIZE DEFAULT=4>$WINSIZE<?/MIVAR>
<?MIVAR NAME=BEGIN DEFAULT=1>$START<?/MIVAR>
<!--- Definition of Ranges ---->
<?MIVAR NAME=BEGIN>$(IF,$(<,$BEGIN,1),1,$BEGIN)<?/MIVAR>
<?MIVAR NAME=END>$(+,$BEGIN,$WINSIZE)<?/MIVAR>
<!--- Execution --->
<TABLE BORDER>
<?MISQL WINSTART=$BEGIN WINSIZE=$WINSIZE
SQL="select tabname from systables where tabname like 'web%'
order by tabname;">
<TR><TD>$1</TD></TR>
<?/MISQL>
</TABLE>
<BR>
<?MIBLOCK COND="$(>,$BEGIN,1)">
<?MIVAR>
<A HREF=$WEB_HOME?MIval=WINWALK&START=$(-,$BEGIN,$WINSIZE)&WINSIZE=$WINSIZE>
Previous $WINSIZE Rows </A> $(IF,$(<,$MI_ROWCOUNT,$WINSIZE), No More Rows, )
<?/MIVAR>
<?/MIBLOCK>
<?MIBLOCK COND="$(AND,$(>,$END,$WINSIZE),$(>=,$MI_ROWCOUNT,$WINSIZE))">
<?MIVAR>
<A HREF=$WEB_HOME?MIval=WINWALK&START=$END&WINSIZE=$WINSIZE>
Next $WINSIZE Rows </A>
<?/MIVAR>
<?/MIBLOCK>
</BODY>
Once upon a time, I was working for a small programming house with a client who had a legacy COBOL application that they wanted converted to Visual Basic. I was never a fan of VB, but that's not an unreasonable thing to want.
Except that they wanted the interface to be preserved and to function identically to the existing version.
So we were forced to produce a VB app consisting of a single form with a grid of roughly 100 text entry boxes, all of which were completely passive. Except the one in the bottom right, which had a single event handler that was several thousand lines long and processed all the data in all the entry boxes when you exited the field.
I have my pride and do not write extremely ugly code (although the definition of ugly changes with experience). My boss pays me to write code and he expects it to be good.
Sometimes you have to write hacks. But you always have to claim the right to fix them later, or else you will be faced with the consequences.
A program that exchanged information between two applications. Needless to say, the data between the two programs was in different formats, covered different use cases, and even meant different things from one app to the other. There were TONS of special cases and "nice" conversions:
if (InputString == "01"))
{ Output.ClientID = Input.Address;}
else if ((InputString = "02") && (Input.Address == null) &&(Input.ClientID < 1300))
{ Output.ClientID = Input.ClientID +1;}
else if (Input.ClientID = 0 )
{ Input.ClientID = 2084; }
And on, and on for hundreds of lines.
This was for internal use in a large manufacturing plant... I cried during most of the time I worked there.
I worked for an insurance management company. We processed online insurance applications back in the early 2000s when online quotes and applications were a bit more rare.
The ugliest part of the system was that we had to send the information back to the underwriting company. While we could gather lots of wonderful data we were forced to write all this data out to a PDF based on the physical form somebody could fill out by hand. We then would take a small subset of the data and transmit that data to the underwriters along with the filled out application. The application PDF would go into their document imaging system and the data would be placed in their ancient fixed-width database. As far as the underwriters were concerned most of the data only existed on that PDF.
We joked that the underwriters probably printed the PDF forms in order to scan them into the document imaging system. It wouldn't have surprised me if they did.

Techniques for redesigning convoluted UI [closed]

I'm sure you've all seen them. Line of Business UIs that have logic such as: "When ComboA is selected, query for values based on that selection, and populate Textbox B", or "When ButtonC is pressed, disable Textboxes C and D", and on and on ... it gets particularly bad when you can have multiple permutations of the logic above.
If presented with the chance to redesign one of these lovely screens, how would you approach it? Would you put a wizard in front of the UI? Would you keep the single-screen paradigm but use some other pattern to make the logic of the UI state maintainable? What process would you use to determine how this would ideally be presented and implemented?
Not that I think this should matter for the responses, but I am currently presented with just this "opportunity", and it's an ASP.NET web page that uses javascript to respond to the user's choices, disable controls, and make ajax calls for additional data.
Something you might want to look at is whether some of those dependencies imply that, while looking similar and serving the same functions, these elements should be split out over multiple pages that are similar but actually different. Someone may have grouped them onto one page because there were enough similarities.
If you can, try looking at the problem as if it were not implemented at all: how would you structure the user interface if you had to implement it now? If that is too radically different and existing users would have major problems, you might have to compromise. But, as Elie said, look at it from the user's view; they are the ones who have to work with your product.
I would model the state of the whole UI in one object. That object should keep track of the state in which each UI object should be, including the list of options of a combo box (and which option is selected, of course).
That means that having one state object you can correctly re-draw the whole screen and not end up in broken states on the UI. Of course, refreshing all the components each time anything changes is not the way to go, so I would refresh them on callbacks from each of the setters in the state object. This would also allow you to have two UIs over the same state if you ever want that.
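One possible shape for such a state object, sketched in TypeScript rather than the original ASP.NET/JavaScript stack (the field names and the callback mechanism are invented for illustration):

type Listener = (state: ScreenState) => void;

class ScreenState {
  private listeners: Listener[] = [];
  private _selectedCity = "";
  private _cityOptions: string[] = [];
  private _submitEnabled = false;

  get selectedCity() { return this._selectedCity; }
  get cityOptions() { return this._cityOptions; }
  get submitEnabled() { return this._submitEnabled; }

  onChange(fn: Listener) { this.listeners.push(fn); }

  // Every setter funnels through notify(), so the screen (or several
  // views over the same state) re-renders from one source of truth.
  setSelectedCity(city: string) {
    this._selectedCity = city;
    this._submitEnabled = city !== ""; // dependent control state lives here, not in the UI
    this.notify();
  }

  setCityOptions(options: string[]) {
    this._cityOptions = options;
    this.notify();
  }

  private notify() { this.listeners.forEach(fn => fn(this)); }
}

// Usage: the view subscribes once and redraws from the state object.
const state = new ScreenState();
state.onChange(s => console.log("redraw:", s.selectedCity, s.submitEnabled));
state.setCityOptions(["New York", "Berlin"]);
state.setSelectedCity("Berlin");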
Start with the KISS principle, and work from there. Don't over-engineer the solution, and try to think about the problem from the user's POV. Your first impression of what would make a good layout is probably close to what you should be building, as a good UI is intuitive.
That being said, single screen versus multiple screens, JavaScript or AJAX, it doesn't really matter. If it looks good, is easy to understand, and behind the scenes, is well commented and written with clear code, it works. It should be maintainable, so aim for modular code blocks with clear functionality.
I think what is most important is the user experience and to a lesser extent the maintainability of the code. On the web I try to minimize the roundtrips as much as possible so I'm not sure that I would take a wizard approach since that would lead the user through multiple pages or require replacing nearly the entire page via AJAX (which just seems wrong). I typically work with my customers to capture the functionality that they require, though I aim at the functionality, not the implementation. I might mock up a few examples to show them alternatives or just hand draw them on a whiteboard to give them ideas. I don't mind doing "hard" or "complex" things in the app if the result is a much improved user interface. Of course, I make it as simple as I can and definitely use good practices, even in Javascript.
