General: how to validate a proposed system architecture?

Good day,
I'm taking over from someone who started defining the architecture of a HW/SW communication tool. I'm now in charge of validating his progress so far, before writing all the specifications and implementing them.
How can I formally proceed? Is there a methodology to validate a HW/SW architecture against initial top-level requirements?
Cheers,
Funky24

In general, your architecture is "valid" if it meets the set of top-level requirements that are architecturally significant to your stakeholders.
With that I mean:
1. It's better not to assume that your list of requirements is complete or that the requirements have been written correctly.
2. You need to make a selection of the most important requirements and focus on those.
3. The list from point 2 will generally include a majority of non-functional requirements (performance, usability, etc.). Functional requirements are of course important, but they usually don't drive the architecture of a system.
A good formal methodology for architecture evaluation is ATAM (Architecture Tradeoff Analysis Method) from the SEI. You could have a look at their technical report and tailor it to your needs and to the size of the project. Since you did not produce the design yourself, you may end up doing some reverse engineering to figure out why certain choices were made.

Related

Have any of you applied TDD in VHDL?

I have applied TDD to software development, and it was really good! Now I'm keen on FPGA design with VHDL, and I am wondering how to apply the TDD methodology to it.
Have any of you used TDD on FPGA design? If so, how did you do it? Do you know of any articles or materials to learn from?
Thanks!
I have never specifically applied TDD, but I always have testbenches for the design components in VHDL projects.
Although it is not entirely within the philosophy of TDD, a lot of hardware design ends up resembling parts of its description. In fact, I would argue that hardware is often much more suitable for fine-grained testing, because components usually have clearly defined inputs and outputs, whereas software units often depend on many other classes before their functionality makes sense.
The resemblance is strong when all sub-blocks are being properly verified, especially if you consider the blocks to be 'features'. Even more so when using stimulus files for testbenches on small arithmetic blocks: it is normal to start with some common and special cases you can think of, and then add more as you find problems in your high-level tests. You will end up working iteratively at a fairly low level.
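As a small illustration of the stimulus-file workflow (the file name, format and bit width here are made up; a real testbench would read whatever its file-IO process expects):

```python
# Hypothetical sketch: generate a stimulus file for a small adder testbench.
# Start with common and corner cases, then add random vectors as gaps show up.
import random

WIDTH = 8
MAX = 2**WIDTH - 1

vectors = [(0, 0), (1, 1), (MAX, 1), (MAX, MAX), (0, MAX)]
vectors += [(random.randint(0, MAX), random.randint(0, MAX)) for _ in range(20)]

with open("adder_stimuli.txt", "w") as f:
    for a, b in vectors:
        expected = (a + b) & MAX          # golden model: wrap-around addition
        f.write(f"{a} {b} {expected}\n")  # testbench compares DUT sum against 'expected'
```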
Some concepts of TDD don't map well to hardware design, including the idea of 'user stories'. Unless you have specifically designed your project to be extensible, it is likely you won't see any top-level tests passing until all your components are in place.
I don't have any specific links or materials for you. Usually there are many custom scripts involved in running all the tests. Apart from testbenches, it often involves checks with assertions, equivalence checks and various forms of formal verification, but doing that in great detail may be more specific to ASIC than FPGA design. The VUnit mentioned in the comment by Brian Drummond looks very interesting.
Note that hardware engineers tend to call checks on VHDL code verification, whereas testing is what you do on a physical product. I had one professor who was very passionate about the distinction.

Can TDD Handle Complex Projects without an upfront design?

The idea of TDD is great, but I'm trying to wrap my head around how to implement a complex system if a design is not proposed upfront.
For example, let's say I have multiple services for a payment processing application. I'm not sure I understand how development would or could proceed across multiple developers if there is not a somewhat solid design upfront.
It would be great if someone could provide an example and high-level steps for putting together a system in this manner. I can see how TDD can lead to simpler and more robust code; I'm just not sure how it can 1) bring different developers to a common architectural vision and 2) result in a system that can abstract out behavior in order to prevent having to refactor large chunks of code (e.g. accepting different payment methods or pricing models based on a long-term development roadmap).
I see the refactoring as a huge overhead in a production system where data model changes increase risks for customers and the company.
Clearly I'm probably missing something that TDD gurus have discovered...
IMHO, it depends on the team's composition and appetite for risk.
If the team consists of several experienced and good designers, you need a less formal 'architecture' phase. It could be just a back-of-the-napkin doodle or a couple of hours at the whiteboard, followed by furious coding to prove the idea. If the team is distributed and/or contains many less-skilled designers, you need to put more time and effort (thinking and documenting) into the design phase before everyone goes off on their own path.
The next item I can think of is to work risk-first. Continually assess the risks to your project, estimate your exposure and impact, and have mitigation plans. Focus on risky and hard-to-reverse decisions first. If a decision is easily reversible, spend less time on it.
Skilled designers are able to evolve the architecture in tiny steps; if you have them, you can tone down the rigor of an explicit design phase.
TDD can necessitate some upfront design, but definitely not big design upfront, because no matter how perfect you think your design is before you start writing code, most of the time it won't pass the reality check TDD forces on it and will blow to pieces halfway through your TDD session (or your code will blow up if you absolutely want to bend it to your original plan).
The great strength of TDD is precisely that it lets your design emerge and be refined as you write tests and refactor. Therefore you should start small and simple, making the fewest assumptions possible about the details beforehand.
Practically, what you can do is sketch out a couple of UML diagrams with your pair (or the whole team, if you really need a consensus on the big picture of what you're going to write) and use these diagrams as a starting point for your tests. But get rid of these models as soon as you've written your first few tests, because from then on they would do more harm than good, misleading you into sticking to a vision that is no longer true.
First of all, I don't claim to be a TDD guru, but here are some thoughts based on the information in your question.
My thoughts on #1 above: as you have mentioned, you need to have an architectural design up front; I can't think of a methodology that can be successful without this. The architecture provides your team with cohesion and vision. You may want to do just-enough design up front, but that depends on how agile you want to be. The team of developers needs to know how they are going to put together the various components of the system before they start coding, otherwise it will just be one big hackfest.
It would be great if someone can provide an example and high level steps to putting together a system in this manner
If you are putting together a system that is composed of services, then I would start by defining the service interfaces and any messages they will exchange. This defines how the various components of your system will interact (this would be an example of your up-front design). Once you have this, you can allocate development resources to build the services in parallel.
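As a rough illustration (the service, message and field names below are invented, not taken from the question), the up-front piece might be little more than a shared contract like this, which each developer can then build and unit-test against:

```python
# Hypothetical sketch of "define the service interfaces and messages first".
# PaymentRequest, PaymentResult and PaymentService are invented names; they
# represent the shared contract the teams would agree on up front.
from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass
class PaymentRequest:
    order_id: str
    amount_cents: int
    currency: str

@dataclass
class PaymentResult:
    order_id: str
    approved: bool
    reason: str = ""

class PaymentService(ABC):
    """Contract each team builds (and unit-tests) against."""
    @abstractmethod
    def charge(self, request: PaymentRequest) -> PaymentResult: ...
```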
As for #2: one of the advantages of TDD is that it gives you a "safety net" during refactoring. Since your code is covered by unit tests, when you come to change some code you will know pretty soon if you have broken something, especially if you are running continuous integration (which most people do with a TDD approach). In that case you either need to adapt your unit tests to cover the new behavior, or fix your code so that your unit tests pass.
result in a system that can abstract out behavior in order to prevent having to refactor large chunks of code
This is simply down to your design, using e.g. a strategy pattern to let you abstract and replace behavior. TDD does not dictate that your design has to suffer. It just asks that you only do what is required to satisfy a functional requirement. If the requirement is that the system must be able to adapt to new payment methods or pricing models, then that becomes a point of your design. TDD, done correctly, will help make sure that you satisfy your requirements and that your design stays on the right lines.
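For instance, a strategy-style seam for payment behavior might look roughly like this (all names invented for illustration):

```python
# Hypothetical sketch: abstract payment behavior behind a strategy, so new
# payment methods are added as new classes rather than refactored in.
from abc import ABC, abstractmethod

class PaymentStrategy(ABC):
    @abstractmethod
    def pay(self, amount_cents: int) -> bool: ...

class CardPayment(PaymentStrategy):
    def pay(self, amount_cents: int) -> bool:
        # call out to the card gateway here
        return True

class InvoicePayment(PaymentStrategy):
    def pay(self, amount_cents: int) -> bool:
        # raise an invoice instead of charging immediately
        return True

class Checkout:
    def __init__(self, strategy: PaymentStrategy):
        self._strategy = strategy          # new payment methods plug in here

    def settle(self, amount_cents: int) -> bool:
        return self._strategy.pay(amount_cents)
```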
I see the refactoring as a huge overhead in a production system where data model changes increase risks for customers and the company.
One of the problems of software design is that it is a wicked problem, which means that refactoring is pretty much inevitable. Yes, refactoring is risky in production systems, but you can mitigate that risk, and TDD will help you. You also need a supple design and a system with low coupling. TDD helps reduce coupling since you are designing your code to be testable. One of the by-products of writing testable code is that you reduce your dependencies on other parts of the system; you tend to code to interfaces, which allows you to replace an implementation with a mock or stub. A good example of this is replacing a call to a database with a mock/stub that returns some known data - you don't want to hit a database in your unit tests. I should mention here that a good mocking framework is invaluable with a TDD approach (Rhino Mocks and Moq are both open source).
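Purely as an illustration of that last point (the class and method names here are invented, and Python's unittest.mock stands in for whichever mocking framework you use):

```python
# Hypothetical sketch: stub out the database behind an interface so the
# unit test never touches a real database.
from unittest.mock import Mock

class PricingService:
    def __init__(self, price_repository):
        self._repo = price_repository            # coded against an interface

    def total_cents(self, sku: str, quantity: int) -> int:
        return self._repo.unit_price_cents(sku) * quantity

def test_total_uses_repository_price():
    repo = Mock()
    repo.unit_price_cents.return_value = 250     # known data instead of a DB hit
    service = PricingService(repo)
    assert service.total_cents("SKU-1", 4) == 1000
```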
I am sure there are some real TDD gurus out there who can give you some pearls of wisdom... Personally, I wouldn't consider starting a new project without a TDD approach.

Experiences with Test Driven Development (TDD) for logic (chip) design in Verilog or VHDL

I have looked on the web and the discussions/examples appear to be for traditional software development. Since Verilog and VHDL (used for chip design, e.g. FPGAs and ASICs) are similar to software languages such as C and C++, it would appear to make sense. However, they have some differences, being fundamentally parallel and requiring hardware to fully test.
What experiences, good and bad, have you had? Any links you can suggest on this specific application?
Edits/clarifications:
10/28/09: I'm particularly asking about TDD. I'm familiar with doing test benches, including self-checking ones. I'm also aware that SystemVerilog has some particular features for test benches.
10/28/09: The implied questions include 1) writing a test for any functionality, never relying on waveforms to check simulations, and 2) writing the tests/testbenches first.
11/29/09: In Empirical Studies Show Test Driven Development Improves Quality they report for (software) TDD "The pre-release defect density of the four products, measured as defects per thousand lines of code, decreased between 40% and 90% relative to the projects that did not use TDD. The teams' management reported subjectively a 15–35% increase in initial development time for the teams using TDD, though the teams agreed that this was offset by reduced maintenance costs." The reduced bugs reduces risk for tape-out, at the expense of moderate schedule impact. This also has some data.
11/29/09: I'm mainly doing control and datapath code, not DSP code. For DSP, the typical solution involves a Matlab bit-accurate simulation.
03/02/10: The advantage of TDD is that you make sure the test fails first. I suppose this could be done with assertions too.
I write code for FPGAs, not ASICs... but TDD is still my preferred approach. I like to have a full suite of tests for all the functional code I write, and I try (not always successfully) to write the test code first. Staring at waveforms always happens at some point when you're debugging, but it's not a good way of validating your code (IMHO).
Given the difficulty of performing proper tests on the real hardware (stimulating corner cases is particularly hard) and the fact that a VHDL compile takes seconds (vs. a "to hardware" compile that takes many minutes or even hours), I don't see how anyone can operate any other way!
I also build assertions into the RTL as I write it, to catch things I know should never happen. Apparently this is seen as a bit "weird", as there's a perception that verification engineers write assertions and RTL designers don't. But mostly I'm my own verification engineer, so maybe that's why!
I use VUnit for test driven development with VHDL.
VUnit is a Python library that invokes the VHDL compiler and simulator and reads the results of the simulation. It also provides several nice VHDL libraries that make it a lot easier to write better test benches, such as a communication library, a logging library and a checking library.
There are many possibilities since it is driven from Python. It is possible both to generate test data and to check the output data from the test in Python. I saw an example the other day where they used Octave - a MATLAB clone - for plotting test results.
VUnit seems very active, and I have several times been able to ask the developers questions directly and get help quite quickly.
A downside is that it is harder to debug compilation errors, since there are so many function/procedure variants with the same name in the libraries. Also, some things are done behind the scenes by preprocessing the code, which means that some errors might show up in unexpected places.
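For reference, a minimal VUnit run script looks roughly like this (the library name and file paths are placeholders):

```python
# Minimal VUnit run script. VUnit discovers test benches (entities named
# tb_* / *_tb with a runner_cfg generic), compiles them, runs the simulator
# and reports pass/fail per test case.
from vunit import VUnit

vu = VUnit.from_argv()                  # parses command-line options
lib = vu.add_library("lib")
lib.add_source_files("src/*.vhd")       # RTL under test
lib.add_source_files("test/tb_*.vhd")   # VUnit test benches
vu.main()                               # compile, simulate, report results
```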
I don't know a lot about hardware/chip design, but I am deeply into TDD, so I can at least discuss suitability of the process with you.
The question I'd call most pertinent is: How quickly can your tests give you feedback on a design? Related to that: How quickly can you add new tests? And how well do your tools support refactoring (changing structure without changing behavior) of your design?
The TDD process depends a great deal on the "softness" of software - good automated unit tests run in seconds (minutes at the outside), and guide short bursts of focused construction and refactoring. Do your tools support this kind of workflow - rapidly cycling between writing and running tests and building the system under test in short iterations?
The SystemVerilog extensions to the IEEE Verilog standard include a variety of constructs which facilitate creating thorough test suites for verifying complex digital logic designs. SystemVerilog is one of the Hardware Verification Languages (HVLs) used to verify ASIC designs via simulation (as opposed to emulation or using FPGAs).
Significant benefits over a traditional hardware description language such as plain Verilog are:
constrained randomization (see the sketch of the idea below)
assertions
automatic collection of functional and assertion coverage data
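Constrained randomization is built into SystemVerilog itself (rand variables plus constraint blocks); the following is only a rough Python sketch of the underlying idea, drawing random stimuli that still satisfy declared constraints, with all names invented:

```python
# Illustration only: constrained-random stimulus generation, Python style.
# In SystemVerilog this would be a class with rand members and constraints.
import random

def random_packet():
    """Draw (length, kind) pairs, rejecting draws that violate the constraints."""
    while True:
        length = random.randint(0, 1024)
        kind = random.choice(["data", "control"])
        if kind == "control" and length > 64:   # constraint: control packets are short
            continue
        if kind == "data" and length == 0:      # constraint: data packets are non-empty
            continue
        return length, kind

stimuli = [random_packet() for _ in range(100)]
```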
The key is to have access to simulation software which supports this recent (2005) standard. Not all simulators fully support the more advanced features.
In addition to the IEEE standard, there is an open-source SystemVerilog library of verification components available from VMM Central (http://www.vmmcentral.com). It provides a reasonable framework for creating a test environment.
SystemVerilog is not the only HVL, and VMM is not the only library. But I would recommend both, if you have access to the appropriate tools. I have found this to be an effective methodology for finding design bugs before they become silicon.
With regard to refactoring tools for hardware languages, I'd like to point you to our tool Sigasi HDT. Sigasi provides an IDE with a built-in VHDL analyzer and VHDL refactorings.
Philippe Faes, Sigasi
ANVIL (ANother Verilog Interaction Layer) discusses this a bit. I haven't tried it.
I never actively tried TDD on an RTL design, but I had my thoughts on this.
What I think would be interesting is to try out this approach in connection with assertions. You would basically first write down, in the form of assertions, what you assume and expect from your module, then write your RTL, and later verify these assertions using formal tools and/or simulation. In contrast to "normal" testcases (where you would probably need to write directed ones), you should get much better coverage, and the assertions/assumptions may be of use later (e.g. at system level) as well.
However, I wouldn't rely fully on assertions; this can become very hairy.
Maybe you can share your thoughts on this as well; since you are asking about it, I guess you already have some ideas in your head?
What is TDD for you? Do you mean having all your code exercised by automatic tests at all times, or do you go further to mean that tests are written before the code and no new code is written unless tests fail?
Whichever approach you prefer, HDL code testing isn't very different from software testing. It has its pluses (much better coverage and depth of testing) and minuses (setting it up is difficult and cumbersome relative to software).
I've had very good experience with employing Python and generic HDL transactors to implement comprehensive and automatic tests for synthesizable HDL modules. The idea is somewhat similar to what Janick Bergeron presents in his books, but instead of SystemVerilog, Python is used to (1) generate VHDL code from test scenarios written in Python and (2) verify the results written out by the monitoring transactors that capture waveforms from the design during simulation.
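As a very stripped-down sketch of the checking half of that flow (the file name, format and golden model below are invented): the monitoring transactor is assumed to dump input/output pairs to a text file, and a Python golden model decides pass/fail.

```python
# Hypothetical sketch: compare a monitor's output file against a golden model.
def golden_model(x: int) -> int:
    return (x * 3 + 1) & 0xFFFF          # stand-in for the intended behaviour

def check(results_path: str) -> bool:
    ok = True
    with open(results_path) as f:
        for lineno, line in enumerate(f, start=1):
            line = line.strip()
            if not line:
                continue
            x, y = (int(v) for v in line.split())
            if y != golden_model(x):
                print(f"line {lineno}: input {x}: got {y}, expected {golden_model(x)}")
                ok = False
    return ok

if __name__ == "__main__":
    raise SystemExit(0 if check("monitor_output.txt") else 1)
```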
There's much more to be written about this technique, but I'm not sure what you want to focus on.

Justification for using non-portable code

How does one choose, and justify, design tradeoffs in terms of optimised code, clarity of implementation, efficiency, and portability?
A relevant example for the purpose of this question could be large file handling, where a "large file" means "quite a few GB", for a problem that would be simplified by random-access methods.
Approaches for reading and modifying this file could be:
Use streams anyway, and seek to the desired place - this is portable, but potentially slow and not clear; it will work on practically all OSes.
Map the relevant portion of the file as a large block, e.g. mmap a 50 MB chunk at a time for processing - this would work on many OSes, depending on the subtleties of that system's mmap implementation.
Just mmap the entire file - this requires a 64-bit OS and is the most efficient and clearest way to implement it, but it does not work on 32-bit OSes.
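For concreteness, option 2 might look roughly like this, sketched with Python's mmap module rather than the C API (the file name and chunk size are arbitrary):

```python
# Rough sketch of option 2: map the file one 50 MB window at a time.
# Offsets passed to mmap must be multiples of mmap.ALLOCATIONGRANULARITY;
# 50 MB is a multiple of the usual granularities (4 KiB / 64 KiB).
import mmap
import os

CHUNK = 50 * 1024 * 1024

def process(window):
    pass  # placeholder for the per-chunk work

with open("huge_file.bin", "rb") as f:
    size = os.fstat(f.fileno()).st_size
    offset = 0
    while offset < size:
        length = min(CHUNK, size - offset)
        mm = mmap.mmap(f.fileno(), length, access=mmap.ACCESS_READ, offset=offset)
        try:
            process(mm)
        finally:
            mm.close()
        offset += length
```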
Not sure what you're asking, but part of the design process is to analyze requirements for portability and performance (amongst other factors).
If you know you'll never need to port the code, and you need absolutely the best performance, then you adjust your implementation accordingly. There's no point being portable just for its own sake.
Note also that if you want both performance and portability, there's nothing stopping you from providing an implementation for each platform. Of course this will increase your cost, so really it's up to you to prioritize your needs.
Without constraints, this question cannot be answered rationally.
You're asking "what is the best color" without telling us whether you're painting a house or a car or a picture.
Constraints would include at least
Language of choice
Target platforms (multi CPU industrial-grade server or iPhone?)
Optimizing for speed vs. memory
Cost (who's funding this and is there a delivery constraint?)
No piece of software could have "ultimate" portability.
An example of this sort of problem being handled using a variety of methods, but with a tight constraint on both the specific input/output required and the measurement of "best", is the WideFinder project.
Basically, you need to think before coding. Every project is unique, and an analysis of the needs can help you decide what is paramount for it. What makes the best solution for any project depends on a few things...
First of all, will this project need to be, or eventually become, multi-platform? Depending on your answer, choosing the right programming language should be easier. Then again, you could also use more than one language in your project, and this is completely normal. Portability does not necessarily mean less performance. All it implies is harder work to achieve your goals, because you will need quality code. Also, every programming language has its own philosophy; learn what they are. One thing is for sure: certain problems come back over and over. This is why knowing the different design patterns can sometimes make a difference, but some languages have their own idioms, and these can be very relevant when choosing a language. Another thing that needs some thought is the different approaches you can take for your project. Multithreading, sockets, client/server systems and many other technologies are all there for you to use. Choosing the right technology can help make a project better.
Knowing the needs, and the different solutions available today, is what will help you decide when the time comes to choose between the different tradeoffs.
It really depends on the drivers for the project. If you are doing in-house enterprise development, then do the simplest thing that could work on your target hardware. Adjust for performance requirements as needed.
If you know you need to support different hardware platforms on day 1, then you'll clearly need to choose a portable implementation, or use multiple approaches.
Portability for portability's sake has been a marketing spiel for Java since its inception and is a fact of life for C by convention, and I believe most people who abide by it "grew up" with Java or C and will tell you so.
In truth, however, absolute portability only holds for trivial to, at most, medium-complexity applications -- anything highly complex will need specialized tweaks.

What is your experience with software model checking? [closed]

What types of applications have you used model checking for?
What model checking tool did you use?
How would you summarize your experience w/ the technique, specifically in evaluating its effectiveness in delivering higher quality software?
In the course of my studies, I had a chance to use Spin, and it aroused my curiosity as to how much actual model checking is going on and how much value organizations are getting out of it. In my work experience, I've worked on business applications, where there is (naturally) no consideration of applying formal verification to the logic. I'd really like to learn about SO folks' model checking experience and thoughts on the subject. Will model checking ever become a more widely used development practice that we should have in our toolkit?
I just finished a class on model checking and the big tools we used were Spin and SMV. We ended up using them to check properties on common synchronization problems, and I found SMV just a little bit easier to use.
Although these tools were fun to use, I think they really shine when you combine them with something that dynamically enforces constraints on your program (so that it's a bit easier to verify 'useful' things about it). We ended up taking the Spring WebFlow framework, which uses XML to write a state-machine-like file specifying which web pages can transition to which others, and using SMV to perform verification on such applications (shameless plug here).
To answer your last question, I think model checking is definitely useful to have, but I lean more towards using unit testing as a technique that makes me feel comfortable about delivering my final product.
We have used several model checkers in teaching, systems design, and systems development. Our toolbox includes SPIN, UPPAAL, Java PathFinder, PVS, and Bogor. Each has its strengths and weaknesses. All find problems with models that are simply impossible for human beings to discover. Their usability varies, though most are push-button automated.
When to use a model checker? I'd say any time you are describing a model that must have (or not have) particular properties and is any larger than a handful of concepts. Anyone who thinks they can describe and understand anything larger or more complex without such tools is fooling themselves.
What types of applications have you used model checking for?
We used the Java PathFinder model checker to verify some safety properties (deadlock, race conditions) and temporal properties (using linear temporal logic to specify them). It supports classical assertions (like NotNull) on Java bytecode; it is a program model checker.
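To make "temporal properties" concrete, a typical response property in LTL looks like this (the predicate names are made up for illustration, not taken from our project):

\[ \mathbf{G}\,(\mathit{request} \rightarrow \mathbf{F}\,\mathit{granted}) \]

read as: globally (in every state), if a request occurs, a grant eventually follows. Deadlock-freedom and race-condition checks are expressed with similar always/eventually patterns over the relevant predicates.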
What model checking tool did you use?
We used Java PathFinder (for academic purposes). It's open source software initially developed by NASA.
How would you summarize your experience w/ the technique, specifically in evaluating its effectiveness in delivering higher quality software?
Program model checking has a major problem with state-space explosion (memory and disk usage). But there is a wide variety of techniques to reduce the problem and handle large artifacts, such as partial order reduction, abstraction, symmetry reduction, etc.
I used SPIN to find a concurrency issue in PLC software. It found an unsuspected race condition that would have been very tough to find by inspection or testing.
By the way, is there a "SPIN for Dummies" book? I had to learn it out of "The SPIN Model Checker" book and various on-line tutorials.
I did some research on this subject during my time at university, extending the State Exploring Assembly Model Checker.
We used a virtual machine to walk each and every possible path/state of the program, using A* and various heuristics depending on the kind of error (deadlock, I/O errors, ...).
It was inspired by Java PathFinder and it worked with C++ code (everything GCC could compile).
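A very rough sketch of that kind of search (the state representation, successors() and is_error() are invented placeholders, not the actual tool's API):

```python
# Illustration only: best-first (A*-style) exploration of a program's state
# space, ranking "suspicious" states first via a heuristic.
import heapq

def explore(initial_state, successors, is_error, heuristic):
    """Return a reachable error state (e.g. a deadlock), or None if none is found."""
    seen = set()
    queue = [(heuristic(initial_state), 0, initial_state)]
    counter = 1                                  # tie-breaker so states are never compared
    while queue:
        _, _, state = heapq.heappop(queue)
        if state in seen:
            continue
        seen.add(state)
        if is_error(state):
            return state
        for nxt in successors(state):            # every enabled transition / interleaving
            if nxt not in seen:
                heapq.heappush(queue, (heuristic(nxt), counter, nxt))
                counter += 1
    return None
```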
But in our experience this kind of technology will not be used in business applications anytime soon, because of GUI-related problems, the work necessary to create an initial test environment, and the enormous hardware requirements (you need lots of RAM and disk space, because of the gigantic state space).

Resources