What types of applications have you used model checking for?
What model checking tool did you use?
How would you summarize your experience with the technique, specifically in evaluating its effectiveness in delivering higher-quality software?
In the course of my studies, I had a chance to use Spin, and it aroused my curiosity as to how much actual model checking is going on and how much value organizations are getting out of it. In my work experience, I've worked on business applications, where there is (naturally) no consideration of applying formal verification to the logic. I'd really like to learn about SO folks' model-checking experience and thoughts on the subject. Will model checking ever become a more widely used development practice that we should have in our toolkit?
I just finished a class on model checking and the big tools we used were Spin and SMV. We ended up using them to check properties on common synchronization problems, and I found SMV just a little bit easier to use.
Although these tools were fun to use, I think they really shine when you combine them with something that dynamically enforces constraints on your program (so that it's a bit easier to verify 'useful' things about it). We ended up taking the Spring WebFlow framework, which uses XML to write a state-machine-like file specifying which web pages can transition to which others, and using SMV to perform verification on such applications (shameless plug here).
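To give a flavor of the kind of property this catches (a toy sketch in plain Java rather than SMV, with a hypothetical flow; the real checks were written as temporal-logic specs):

```java
import java.util.*;

// Toy illustration: a web flow is essentially a transition relation,
// and many useful checks are reachability questions over it. A model
// checker automates this exhaustively and adds temporal logic on top.
public class FlowCheck {
    // Hypothetical flow: each page maps to the pages it may transition to.
    static final Map<String, List<String>> FLOW = Map.of(
        "start",    List.of("cart", "help"),
        "cart",     List.of("checkout", "start"),
        "help",     List.of("start"),
        "checkout", List.of("confirm"),
        "confirm",  List.of());

    public static void main(String[] args) {
        // Property: every page reachable from "start" can still reach
        // "confirm" (i.e. no dead-end pages except the final one).
        for (String page : reachableFrom("start"))
            if (!page.equals("confirm") && !reachableFrom(page).contains("confirm"))
                System.out.println("Dead end: " + page);
    }

    static Set<String> reachableFrom(String src) {
        Set<String> seen = new HashSet<>();
        Deque<String> work = new ArrayDeque<>(List.of(src));
        while (!work.isEmpty()) {
            String s = work.pop();
            if (seen.add(s)) work.addAll(FLOW.getOrDefault(s, List.of()));
        }
        return seen;
    }
}
```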
To answer your last question, I think model checking is definitely useful to have, but I lean more towards using unit testing as a technique that makes me feel comfortable about delivering my final product.
We have used several model checkers in teaching, systems design, and systems development. Our toolbox includes SPIN, UPPAAL, Java Pathfinder, PVS, and Bogor. Each has its strengths and weaknesses. All find problems in models that are simply impossible for human beings to discover. Their usability varies, though most are push-button automated.
When to use a model checker? I'd say any time you are describing a model that must have (or not have) particular properties and it is any larger than a handful of concepts. Anyone who thinks they can describe and understand anything larger or more complex unaided is fooling themselves.
What types of applications have you used model checking for?
We used the Java Path Finder model checker to verify safety properties (deadlock, race conditions) and temporal properties (using Linear Temporal Logic to specify them). It supports classical assertions (like NotNull) on Java bytecode; it is a tool for program model checking.
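For a flavor of what that looks like, here is a hypothetical example of the kind of bug JPF catches: two threads race on an unsynchronized counter, and JPF systematically explores the interleaving in which both threads read the same value, so the final assertion fails (something an ordinary test run would rarely hit).

```java
// Run under JPF (with assertion checking enabled): it explores all
// thread interleavings and reports the one where an update is lost.
public class RacyCounter {
    static int count = 0;

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(() -> count++); // read-modify-write, not atomic
        Thread t2 = new Thread(() -> count++);
        t1.start(); t2.start();
        t1.join();  t2.join();
        assert count == 2 : "lost update: count = " + count;
    }
}
```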
What model checking tool did you use?
We used Java Path Finder (for academic purposes). It's open-source software, originally developed by NASA.
How would you summarize your experience with the technique, specifically in evaluating its effectiveness in delivering higher-quality software?
Program model checking has a major problem: state-space explosion (memory and disk usage). But there is a wide variety of techniques to reduce the problem and handle large artifacts, such as partial-order reduction, abstraction, and symmetry reduction.
I used SPIN to find a concurrency issue in PLC software. It found an unsuspected race condition that would have been very tough to find by inspection or testing.
By the way, is there a "SPIN for Dummies" book? I had to learn it from "The SPIN Model Checker" book and various online tutorials.
I did some research on that subject during my time at university, extending the State Exploring Assembly Model Checker.
We used a virtual machine to walk each and every possible path/state of the program, using A* with a heuristic that depended on the kind of error (deadlock, I/O errors, ...).
It was inspired by Java Pathfinder, and it worked with C++ code (everything GCC could compile).
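The core of such a directed search is a best-first loop over program states. This is a rough, generic sketch (in Java, to match the other examples here, not our actual C++ code); the state type, successor function, error predicate, and heuristic are all placeholders:

```java
import java.util.*;
import java.util.function.*;

// Heuristic-guided (A*-style, best-first) state-space exploration:
// always expand the state the heuristic considers closest to an error.
final class DirectedSearch<S> {
    Optional<S> search(S initial,
                       Function<S, List<S>> successors, // enumerate next states
                       Predicate<S> isError,            // deadlock, I/O error, ...
                       ToIntFunction<S> heuristic) {    // lower = closer to an error
        Set<S> visited = new HashSet<>();
        PriorityQueue<S> frontier =
            new PriorityQueue<>(Comparator.comparingInt(heuristic));
        frontier.add(initial);
        while (!frontier.isEmpty()) {
            S state = frontier.poll();
            if (isError.test(state)) return Optional.of(state); // counterexample
            if (visited.add(state))
                frontier.addAll(successors.apply(state));
        }
        return Optional.empty(); // state space exhausted, no error reachable
    }
}
```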
But in our experience, this kind of technology will not be used in business applications soon, because of GUI-related problems, the work needed to create an initial test environment, and the enormous hardware requirements. (You need lots of RAM and disk space because of the gigantic state space.)
Considering that many libraries already ship optimized sorting implementations, why do so many companies still ask about Big O and sorting algorithms, when in everyday computing this type of implementation is rarely needed anymore?
For example, the self-balancing binary tree is the kind of problem that some big companies in the embedded industry still ask programmers to code as part of their testing and candidate-selection process.
Even in embedded development, are there any circumstances in which such an implementation will actually be written, given that Boost, SQLite, and other libraries are available? In other words, is it still worth thinking about ways to optimize such algorithms?
As an embedded programmer, I would say it all comes down to the problem and system constraints. Especially on a microprocessor, you may not want or need to pull in Boost, and SQLite may still be too heavy for a given problem. How you chop up problems looks different if you have, say, 2K of RAM, but this is definitely the extreme.
So for example, you probably don't want to rewrite a red-black tree yourself, because as you pointed out, you will likely end up with highly non-optimized code. But in the pursuit of reusability, abstraction often adds layers of indirection to the code. So you may end up reimplementing at least simpler "solved" problems where you can do better than the built-in library for certain niche cases. Recently I wrote a specialized version of linked lists using shared memory pools for allocation across lists, for example. I benchmarked against STL's list and it just wasn't cutting it because of the added layers of abstraction. Is my code as good? No, probably not. But I was more easily able to specialize the specific use cases, so it came out better.
So I guess I'd like to address a few things from your post:
-Why do companies still ask about big-O runtime? I have seen even pretty good engineers make bad choices with regard to data structures because they didn't reason carefully about the O() runtime. Quadratic versus linear, or linear versus constant time, is a painful lesson when you get the assumption wrong. (Ah, the voice of experience.) See the sketch after this list.
-Why do companies still ask about implementing classic structures/algorithms? Arguably you won't reimplement quicksort, but as stated, you may very well end up implementing slightly less complicated structures on a more regular basis. Truthfully, if I'm looking to hire you, I'm going to want to make sure that you understand the theory inside and out for existing solutions, so that if I need you to come up with a new solution you can take an educated crack at it. And if the other applicant has that and you don't, I'd probably say they have an advantage.
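To illustrate the big-O point from the first bullet, here is a hypothetical example of the same de-duplication task written with a quadratic and a linear structure choice. Both are correct; only one survives large inputs.

```java
import java.util.*;

public class DedupCost {
    // O(n^2): List.contains is a linear scan, performed once per element.
    static List<Integer> dedupQuadratic(List<Integer> input) {
        List<Integer> out = new ArrayList<>();
        for (Integer x : input)
            if (!out.contains(x)) out.add(x);
        return out;
    }

    // O(n): HashSet membership tests are amortized constant time.
    static List<Integer> dedupLinear(List<Integer> input) {
        Set<Integer> seen = new HashSet<>();
        List<Integer> out = new ArrayList<>();
        for (Integer x : input)
            if (seen.add(x)) out.add(x); // add() returns false on duplicates
        return out;
    }
}
```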
Also, here's another way to think about it. In software development, the platform is often quite powerful and the consumer already owns the hardware. In embedded software development, the consumer is probably purchasing the hardware platform, hopefully from your company. This means that the software is often selling the hardware. So it often makes more cents to use less powerful, cheaper hardware to solve a problem and take a bit more time to develop the firmware. The firmware is a one-time cost up front, whereas hardware is a per-unit cost. So even from the business side there are pressures toward constrained hardware, which in turn leads to specialized structure implementations on the software side.
If you suggest using SQLite on a 2 kB Arduino, you might hear a lot of laughter.
Embedded systems are a bit special. They often have extraordinarily tight memory requirements, so space complexity might be far more important than time complexity. I've rarely worked in that area myself, so I don't know what embedded-systems companies are interested in, but if they're asking such questions, it's probably because you'll need to be more acquainted with such issues than in other areas of IT.
Nothing is optimized enough.
Besides, the questions are meant to test your understanding of the solution (and each part of the solution) and not how great you are at memorizing stuff. Hence it makes perfect sense to ask such questions.
I am a big proponent of agile, but a friend of mine (who doesn't know agile yet; he's a managerial type ^^) asked me how I would go about planning and developing a complex distributed project, with a database layer, comms layer, interface, and integration into embedded devices.
The agile method emphasises releasing early and iterating, but in the scenario of a project with many interconnected components that all need to be functional for the whole thing to work, it would be difficult to release an early version without working on all the components. How would agile help my friend here? How best would he utilize it?
Teams in my company face the same types of problems. We are building projects with a large number of moving parts and architectural layers that make it difficult to create a working product early on. Additionally, there are often specialty resources that need to be scheduled and that run slightly out of sync with the team. Some approaches we've taken are below. It has been challenging, but these approaches seem to be helping.
Build as vertically as possible
In other words, strive to have something working end to end as quickly as possible. We typically get there a few sprints into a 9-16 month project.
You'll often find a significant number of layers can be mocked or held back.
Often, the initial customer-facing components are placeholders. We create a limited bit of functionality that is something like what the customer wants but is likely to be very different in the final product. This allows us to prove the rest of the product at a system level and provide visibility from a system perspective.
Separate base architecture from the product
Our early sprints are often centered around infrastructure/architecture. For example, threading subsystems, performance monitoring, communications and test frameworks.
Treat the subsystems as separate deliverables
Fully define each subsystem
Complete (truly complete, not just a partial implementation) each subsystem
Load test each subsystem within the context of how it will be used in the final product
Make your first iteration dedicated to architectural design, including identifying the necessary components and defining the relationships and communications between them.
Once you have a clear picture of how the components interact, build the skeleton of each one. That is, implement "stub" components that just have the communication part in place, while the rest of the functionality does nothing or returns test data. Dedicate an iteration to this task (including testing the component communication mechanisms) as well.
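A minimal sketch of such a stub (hypothetical names; the real communication mechanism would sit behind the same interface):

```java
// The contract the other components are wired against.
interface InventoryService {
    int stockLevel(String sku);
}

// Skeleton implementation for early iterations: the wiring is real,
// the behavior is canned test data.
class StubInventoryService implements InventoryService {
    @Override
    public int stockLevel(String sku) {
        return 42; // canned value, replaced by the real lookup later
    }
}
```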
Then you can plan iterations to fully develop each component in the appropriate order, so that the system can grow in an ordered way.
TDD - iterate with incomplete parts after writing tests. Mock the bits that aren't ready. Sounds exciting.
It is unlikely that each layer needs to be complete to be usable by the other layers. For example, the persistence layer could just serialize objects to a file initially and be converted to use a database when the need arises. I would look at implementing the minimum of each layer needed by the initial stories, then fleshing it out to add functionality as the system grows.
Growing a system this way means you only implement the functionality you need, rather than all the functionality you think you may need at some indeterminate time in the future.
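As a sketch of that serialize-to-a-file-first idea (the OrderStore interface and file name here are hypothetical): callers depend only on the interface, so a database-backed implementation can be dropped in later without touching them.

```java
import java.io.*;
import java.util.*;

interface OrderStore {
    void save(List<String> orders) throws IOException;
    List<String> load() throws IOException;
}

// Initial persistence layer: plain Java serialization to a file.
class FileOrderStore implements OrderStore {
    private final File file = new File("orders.ser");

    @Override
    public void save(List<String> orders) throws IOException {
        try (ObjectOutputStream out =
                 new ObjectOutputStream(new FileOutputStream(file))) {
            out.writeObject(new ArrayList<>(orders));
        }
    }

    @Override
    @SuppressWarnings("unchecked")
    public List<String> load() throws IOException {
        if (!file.exists()) return new ArrayList<>();
        try (ObjectInputStream in =
                 new ObjectInputStream(new FileInputStream(file))) {
            return (List<String>) in.readObject();
        } catch (ClassNotFoundException e) {
            throw new IOException(e);
        }
    }
}
// Later, a JdbcOrderStore implements the same interface when a story
// actually requires a database.
```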
If you cannot break the large project into smaller parts that are useful (i.e. enable some use cases) on their own, agile probably won't help you that much in this project. You can pick some techniques like pair programming, refactoring, etc., but the overall planning has to be done in a conventional way.
Agile emphasizes quick iterations without wasteful planning.
MVC emphasizes separation of concerns based on a planned architecture.
Since non-MVC technologies require less planning, could they be more appropriate in an Agile project?
Separation of concerns does not necessitate that you plan out every detail before you start coding. And agile does not mean that you just write code as it comes to mind. Agile means not being too attached to your initial idea of what the project will look like, being ready to refactor should the need arise (as it usually does), and not being afraid to throw out big pieces of code in the process.
Separation of concerns can very well make refactoring a lot easier, so MVC can be a big helper of agility.
Agile development is typically a process of rapid prototyping and refactoring. MVC's separation of concerns can often make both processes easier and faster.
Design patterns are a fundamental part of quick development. Popular design patterns are popular because they have wide utility. Relying heavily on patterns can make a workable architecture for a project crystallize much more quickly. The common vocabulary afforded by design patterns makes it easier for a team to communicate the structure of a project and focus on the domain-specific issues. Should one pattern turn out to be inconvenient for the progress of the project, the relationship that pattern has with other alternatives is likely well understood, simplifying the task of refactoring to an alternative layout.
That being said, the MVC pattern has tremendous gravity. One of the major reasons it works so well is that it tends to emphasize APIs. This sort of isolation makes it much easier to change certain parts of a system without having a major effect on unrelated parts. If a layer of the system has a defect, it's normally easy to alter that layer without affecting other layers, because they are separated by a well-defined API. If an API is itself deficient, then it is often possible to alter the API exposed without affecting the actual logic of either layer (although this tends to be more difficult than the first kind of deficiency).
When you find the right balance between structure and flexibility, it's worth its weight in gold.
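To make that isolation concrete, here is a minimal sketch (hypothetical names, not any particular framework): the view depends only on a small model API, so the model's internals can be refactored freely during iteration.

```java
import java.util.List;

// The API boundary between layers.
interface TaskModel {
    List<String> openTasks();
}

// One implementation; could be swapped for a database-backed one
// without the view noticing.
class InMemoryTaskModel implements TaskModel {
    private final List<String> tasks = List.of("write spec", "review PR");
    @Override public List<String> openTasks() { return tasks; }
}

// The view knows only the API, not the implementation.
class ConsoleTaskView {
    void render(TaskModel model) {
        model.openTasks().forEach(t -> System.out.println("[ ] " + t));
    }
}
```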
I tend not to like most (current) MVC paradigms, because I believe they introduce pointless abstraction, reinvent the wheel, and add a lot of rigidity.
But I also tend to have highly structured programs that separate content from business logic from data access, and that have as few "configurations" as possible to accomplish one thing. Ideally, to accomplish one thing, you should only have to edit one or two things.
Needless abstraction is the root of many problems.
The key phrase in agile is 'the simplest thing that could possibly work'.
If the simplest solution to a problem is:
a single script
a single web page
a single installation of a standard tool like a wiki
a single-user single-database 'just edit the data' editor
Then those won't have MVC, and will be the appropriate agile solutions.
If it is obvious from the start of the project that nothing like that is going to come close to solving the problem, it would be pointlessly literal process-following to try them and wait to fail before trying the next simplest solution.
As we are a small company, I work as both a project manager and developer. The specifications I create for clients describe and define the project through a number of elements: user stories, alongside whatever else I feel is needed to define the project for the client (e.g. wireframes, user flows, sitemaps, etc.).
If a functional specification “describes how a product will work entirely from the user's perspective. It doesn't care how the thing is implemented. It talks about features,” then does anyone see any problem with using user stories to define a functional specification for a website? Does anyone actually write functional specifications this way?
Really I am trying to up my game a little, and wondering if this approach would work for larger clients who perhaps have more stringent ideas about what a functional specification should contain, where a formal approach may be required. At the moment our clients definitely respond well to our method of producing documentation.
I am interested in hearing what people who do project management professionally think about this.
I'm at odds with what a couple of other people have said.
First up, the bit I agree with: stories are a great way of stating functional requirements. For my money they're one of the best ways of actually communicating requirements in a way end users will really take in. I've seen too many specs get signed off without ever having been read.
The one thing I would say you might want to append to them is non-functional requirements - covering performance, security, data volumes, audit, archive and so on. While they can be covered in stories, sometimes they're better covered in a way that crosses all stories.
In terms of whether it's suitable for large companies, this is where I disagree. In my experience (and I've done projects for Shell, American Express, a couple of multinational banks, and others), they're often no more formal than smaller companies, so they'll be fine with stories. The reality is that a user in a large company is no better equipped (or more interested) in reading class and sequence diagrams than a user anywhere else.
The size and complexity of the project may require more detailed requirements but it's the size of the project, not the size of the company that matters when you're determining how you document requirements.
For me the best requirements documentation is documentation that's suited to its audience, and for me user stories hit the nail on the head most of the time. They're short enough and clear enough that when users sign them off it means something, because they've read and understood them (as opposed to the sign-off of a 100-page spec, which invariably means they haven't really read it), but good enough for the developers to work from too.
Yes, you can use user stories for your functional requirements. I do it all the time, and have been for years. In my opinion, it works really well, and better than other systems I have used.
Would this approach work for larger clients? To make a gross generalization, no. They are going to have some system they use to define requirements, and likely it's not user stories. If you come in with user stories, there is going to be a disconnect with their current practices, which you will have to work through.
I have been successful using user stories with larger organizations, but it takes a concerted effort, to which both parties need to be committed.
What you're describing are the use-case scenarios that define the features; this is what is required from a usability perspective, and it is exactly what the client will understand and agree to. Screen mockups and flow diagrams will definitely help both the client and the developers.
An implementation specification may then be required to instruct developers on what needs to be included in the actual construction. Its depth will be determined by your developers' capabilities, including their knowledge of the house architecture/framework and methodologies/conventions, and it may include specifics on what impacts various parts of the application.
We also work in small teams (sometimes one or two developers) and believe the above is all that's required in this instance.
Larger companies with much larger teams may use modeling software, UML diagrams, and other more detailed specifications. In the case where you are the primary developer, you should still spend the time designing your application, but if nobody is going to review the designs and insist on all the formalities, your time is better spent implementing the software.
In my experience, UI programming is very time-consuming, expensive (designers, graphics, etc.), and error-prone. And by definition, UI bugs or glitches are very visible and embarrassing.
What do you do to mitigate this problem?
Do you know of a solution that can automatically convert an API into a user interface (preferably a web user interface)?
Probably something like a JMX console:
with good defaults
can be tweaked with CSS
where fields can be configured to be radio buttons, drop-down lists, text fields, or text areas, etc.
localizable
etc
Developing a UI is time-consuming and error-prone because it involves design. Not just visual or sound design, but, more importantly, interaction design. A good API is always interaction-model neutral, meaning it puts minimal constraints on actual workflow, localisation, and information representation. The main drivers behind this are encapsulation and code reuse.
As a result, it is impossible to extract enough information from an API alone to construct a good user interface tailored to a specific case of API use.
However, there are UI generators that normally produce CRUD screens based on a given API. Needless to say, such generated UIs are not well suited to frequent users who demand higher UI efficiency, nor are they particularly easy to learn in the case of a larger system, since they don't really communicate the system image or interaction sequences well.
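For illustration, a naive generator of this sort might just reflect over a bean's getters and emit one form field per property (the bean shape and the HTML emitted here are invented). Everything beyond this field enumeration (ordering, grouping, workflow, wording) is exactly the design information the API does not carry.

```java
import java.lang.reflect.Method;

public class FormGenerator {
    // Emit a crude HTML form with one input per getter on the bean class.
    public static String formFor(Class<?> bean) {
        StringBuilder html = new StringBuilder("<form>\n");
        for (Method m : bean.getMethods()) {
            if (m.getName().startsWith("get") && m.getParameterCount() == 0
                    && !m.getName().equals("getClass")) {
                String field = m.getName().substring(3).toLowerCase();
                String type = (m.getReturnType() == boolean.class)
                        ? "checkbox" : "text";
                html.append("  <label>").append(field)
                    .append(" <input type=\"").append(type)
                    .append("\" name=\"").append(field).append("\"></label>\n");
            }
        }
        return html.append("</form>").toString();
    }
}
```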
It takes a lot of effort to create a good UI because it needs to be designed around specific user needs; it is not some mundane API-to-UI conversion task that can be fully automated.
To speed up the process of building a UI and mitigate the risks, you could either involve UI professionals or learn more about the job yourself. Unfortunately, there is no shortcut or, so to speak, magic wand that will produce a quality UI based entirely and only on an API, without lots of additional information and analysis.
Please also see an excellent question: "Why is good UI design so hard for some developers?" that has some very insightful and valuable answers, specifically:
Shameless plug for my own answer.
Great answer by Karl Fast.
I don't believe UI programming is more time-consuming than any other sort of programming, nor is it more error-prone. However, bugs in the UI are often more obvious. Spotting an error in a compiler is often much trickier.
One clear difference with UI programming is that you have a person at the other end instead of another program, which is very often the case when you're writing compilers, protocol parsers, debuggers, and other code that talks to other programs and computers. This means that the entity you're communicating with is not well specified and may behave very erratically.
EDIT: "unpredictable" is probably a more appropriate term. /Jesper
Your question of converting an API to a user interface just doesn't make sense to me. What are you talking about?
Looks like you are looking for the 'Naked Objects' architectural pattern. There are various implementations available.
http://en.wikipedia.org/wiki/Naked_objects
I'm not providing a solution, but I'll attempt to answer the why.
I don't speak for everyone, but for me at least, I believe one reason is that programmers tend to concentrate on functionality more than usability, and they tend not to be too artistic. I think they just have a different type of creativity. I find that it takes me a long time to create the right graphics, compared to how long it takes me to write the code (though, for the most part, I haven't done any projects with too many graphical requirements).
Automatically generating user interfaces may be possible to some extent, in that a generator can produce controls for the required input and output of data. But UI design is much more involved than simply putting the required controls onto a screen. To create a usable, user-friendly UI, knowledge from disciplines such as graphic design, ergonomics, psychology, etc. has to be combined. There is a reason that human-computer interaction is becoming a discipline of its own: it's not trivial to create a decent UI.
So I don't think there's a real solution to your problem. UI design is a complex task that simply takes time to do properly. The only area where it is relatively easy to win some time is with the tooling: if you have powerful tools to implement the design of the user interface, you don't have to hand-code every pixel of the UI yourself.
You are absolutely correct when you say that UI is time consuming, costly and error prone!
A great compromise I have found is as follows...
I realized that a lot of data (if not most) can be presented using a simple table (such as a JTable), rather than continuously trying to create custom panels and fancy GUIs. It doesn't seem obvious at first, but it's quite decent, usable, and visually appealing.
Why is it so fast? Because I was able to create a reusable framework which can accept a collection of concrete models and, with little to no effort, render all these models within the table. So much code reuse, it's unbelievable.
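The heart of such a framework can be surprisingly small. A cut-down sketch of the idea (names are illustrative, not my actual code):

```java
import java.util.List;
import java.util.function.Function;
import javax.swing.table.AbstractTableModel;

// One generic TableModel for any collection of model objects, given
// column names and per-column value extractors.
class GenericTableModel<T> extends AbstractTableModel {
    private final List<T> rows;
    private final List<String> columns;
    private final List<Function<T, Object>> getters;

    GenericTableModel(List<T> rows, List<String> columns,
                      List<Function<T, Object>> getters) {
        this.rows = rows;
        this.columns = columns;
        this.getters = getters;
    }

    @Override public int getRowCount()             { return rows.size(); }
    @Override public int getColumnCount()          { return columns.size(); }
    @Override public String getColumnName(int col) { return columns.get(col); }
    @Override public Object getValueAt(int row, int col) {
        return getters.get(col).apply(rows.get(row));
    }
}
```

A new screen for, say, a hypothetical Person model is then one line: `new JTable(new GenericTableModel<>(people, List.of("Name", "Age"), List.of(Person::getName, Person::getAge)))`.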
By adding a toolbar above the window, my framework can add entries to, remove them from, or edit them in the table. Using the full power of JTables, I can hide (by filtering) and sort as needed by extending various classes (but only if/when required).
I find myself reusing a heck of a lot of code every time I want to display and manage new models. I make extensive use of icons (per column, row, or cell, etc.) to beautify the screens. I use large icons as a window header to make each screen 'appear' different and appealing; it always looks like a new and different screen, but it's always the same code behind it.
A lot of work and effort was required at first to build the framework, but now it's paying off big time.
I can write the GUI for an entirely new application with as many as 30 to 50 different models, and as many screens, in a fraction of the time it would take me using the 'custom UI method'.
I would recommend you evaluate and explore this approach!
If you already know, or could learn, Ruby on Rails, ActiveScaffold is excellent for this.
One reason is that we don't have a well-developed pattern for UTDD - User Test Driven Development. Nor have I seen many good examples of mapping User Stories to Unit Tests. Why, for example, do so few tutorials discuss User Stories?
ASP.NET Dynamic Data is something you should investigate. It meets most, if not all, of your requirements.
It's hard because most users/customers are dumb and can't think straight! :)
It's time consuming because UI devs/designers are so obsessive-compulsive! :)