What is the difference between Requirements and FUR (Functional User Requirements) in COSMIC?

Are the concept of Functional User Requirements (as defined below) and the concept of Functional Requirements essentially the same, or not?
Definition of Requirement, Object-Oriented Software Engineering Using UML, Patterns, and Java, Bruegge et al. (3rd edition)
Requirements specify a set of features that the system must have. A
functional requirement is a specification of a function that the system must support, whereas a nonfunctional requirement is a constraint on the operation of the system that is not related directly to a function of the system.
Definition of FUR (Functional User Requirements), COSMIC Measurement Manual for ISO 19761 (2021)
Functional User Requirements: sub-set of the user requirements
describing what the software shall do, in terms of tasks and services.
This doubt arose because the COSMIC Measurement Manual contains the diagram linked below, in which requirements are the input of the "Measurement Strategy" phase and FURs are its output, so I think there is a difference that I cannot see...
[COSMIC measurement process diagram]

Difference between standard and non-standard diagrams in Software engineering

I am writing documentation for my Software Engineering subject. My project is on a Hospital Management System. Here is the question that is confusing me.
(2. Architectural design) Present the overall software architecture, stating whether it's Layered, Repository, Client-Server, or Pipe and Filter architecture (skim through pages 155 to 164 of our text reference book to see descriptions of these different architectures). Describe and present it on a standard or non-standard diagram.
So what is the difference between standard and non-standard diagram?
The question is indeed confusing, since it presents architectural models as if they were mutually exclusive (whereas a system can, for example, be layered and client-server at the same time) and relies on ambiguous terminology.
When it comes to architectural diagrams, there are standard diagrams, which follow a well-known formal graphical notation. Typical examples are:
UML
Older OO notations (e.g. Booch, Rumbaugh or Objectory; these are really old, since they were merged together to create UML).
Non-OO notations, such as the IDEF suite (which has since been enriched with an OO layer), SADT, and Gane & Sarson (also quite old and less and less used, except in some niche markets).
Among those, the only one that officially and unambiguously qualifies as a standard is UML: it is the only one recognized by an international standard-setting body (ISO/IEC 19505).
But in architecture you also have a fair number of non-standard diagrams that convey the structural intent graphically. Typically, a layered arrangement of services, or hexagonal or concentric presentations, are frequently used. Sometimes it is even more visual, with clients shown as PCs and several servers in the network. All of these use non-standard notations.

Is static analysis really formal verification?

I have been reading about formal verification, and the basic point is that it requires a formal specification and model to work with. However, many sources classify static analysis as a formal verification technique, and some mention abstract interpretation and its use in compilers.
So I am confused - how can these be formal verification if there is no formal description of the model?
EDIT: A source I found reads:
Static analysis: the abstract semantics is computed automatically from
the program text according to predefined abstractions (that can
sometimes be tailored automatically/manually by the user)
So does this mean it works just on the source code, with no need for a formal specification? That would be what static analysers do.
Also, is static analysis possible without formal verification? E.g. does SonarQube really apply formal methods?
In the context of hardware and software systems, formal verification is the act of proving or disproving the correctness of intended algorithms underlying a system with respect to a certain formal specification or property, using formal methods of mathematics.
How can these be formal verification if there is no formal description of the model?
A static analyser will generate the control/data flow of a piece of code, upon which formal methods can then be applied to verify conformance to the system's/unit's expected design model.
Note that modelling/formal-specification is NOT a part of static-analysis.
However, combined together, both of these are useful in formal verification.
For example, suppose a system is modeled as a Finite State Machine (FSM) with:
a pre-defined number of states, defined by a combination of specific values of certain member data;
a pre-defined set of transitions between the various states, defined by the list of member functions.
Then the results of static analysis help in formally verifying that the control NEVER flows along a path that is NOT present in the above FSM model.
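As a minimal sketch of such a model in code (the door controller, its states, and its functions are hypothetical, invented purely for illustration), the states are values of one member datum and the transitions are the member functions; a static analyser that computes the control flow and call graph can check that no code path performs a transition outside this set:

    #include <assert.h>

    /* Hypothetical FSM: the state is defined by the value of one member datum. */
    typedef enum { DOOR_CLOSED, DOOR_OPEN, DOOR_LOCKED } door_state;

    typedef struct {
        door_state state;   /* the "certain member data" defining the state */
    } door;

    /* Transitions defined by the list of member functions. The model allows:
     * CLOSED->OPEN, OPEN->CLOSED, CLOSED->LOCKED, LOCKED->CLOSED. */
    void door_open(door *d)   { assert(d->state == DOOR_CLOSED); d->state = DOOR_OPEN;   }
    void door_close(door *d)  { assert(d->state == DOOR_OPEN);   d->state = DOOR_CLOSED; }
    void door_lock(door *d)   { assert(d->state == DOOR_CLOSED); d->state = DOOR_LOCKED; }
    void door_unlock(door *d) { assert(d->state == DOOR_LOCKED); d->state = DOOR_CLOSED; }

    int main(void) {
        door d = { DOOR_CLOSED };
        door_open(&d);
        door_close(&d);
        /* door_lock(&d); door_open(&d);  <- LOCKED->OPEN is not in the model;
         * control/data-flow analysis of the call sites is what lets a tool
         * confirm that no such path exists. */
        return 0;
    }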
Also, if a model can be defined simply in terms of type definitions, data flow, and control flow/call graph, i.e. properties of the code that a static analyser can check, then static analysis by itself is sufficient to formally verify that the code conforms to such a model.
NOTE 1. Static analysers are also used to enforce things like coding guidelines and naming conventions, i.e. aspects of the code that cannot affect the program's behavior; such checks are not formal verification.
NOTE 2. Some formal verification requires additional steps, like 100% dynamic code coverage and the elimination of unused and dead code, that cannot be detected/enforced using a static analyser.
Static analysis is highly effective in verifying that a system/unit is implemented using a subset of the language specification to meet goals laid out in the system/unit design.
For example, if it is a design goal to prevent the stack memory from exceeding a particular limit, then one could apply a limit on the depth of recursion (or forbid recursive function calls altogether). Static analysis is used to identify such violations of design goals.
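For instance (a hypothetical sketch; the function names are invented), a static analyser that builds the call graph can detect the cycle in the first function below and report a violation of a "no recursion" design rule, while the second version keeps the stack depth statically bounded:

    /* The call graph contains the cycle factorial -> factorial, which a
     * static analyser can flag if the design forbids recursion. */
    unsigned long factorial(unsigned int n) {
        return (n <= 1u) ? 1ul : (unsigned long)n * factorial(n - 1u);
    }

    /* An iterative rewrite satisfies the rule: its stack usage is a
     * statically known constant, so the design goal becomes checkable. */
    unsigned long factorial_iter(unsigned int n) {
        unsigned long result = 1ul;
        while (n > 1u) {
            result *= n--;
        }
        return result;
    }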
In the absence of any warnings from the static analyser, the system/unit code stands formally verified against such design goals of its respective model.
E.g. the MISRA-C standard defines a subset of C for use in automotive systems. MISRA-C:2012 contains:
143 rules, each of which is checkable using static program analysis;
16 "directives" that are more open to interpretation, or relate to process.
Static analysis just means "read the source code and possibly complain". (Contrast with "dynamic analysis", meaning "run the program and possibly complain about some execution behavior".)
There are lots of different types of possible static-analysis complaints.
One possible complaint might be,
Your source code does not provably satisfy a formal specification
This complaint would be based on formal verification if the static analyzer had a formal specification which it interpreted "formally", a formal interpretation of the source code, and a trusted theorem prover that could not find an appropriate theorem.
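As a sketch of what such a machine-checked specification can look like (illustrative only; the contract below is written in ACSL, the annotation language consumed by deductive verifiers such as Frama-C/WP, and the function is hypothetical):

    /* The ACSL contract is the formal specification; a prover-backed
     * analyser attempts to show the body satisfies it. If its theorem
     * prover cannot find a proof, the tool emits exactly the complaint
     * quoted above. */

    /*@ requires n > 0;
        requires \valid_read(a + (0 .. n-1));
        ensures \forall integer k; 0 <= k < n ==> \result >= a[k];
    */
    int max_of(const int *a, int n) {
        int m = a[0];
        int i = 1;
        /*@ loop invariant 1 <= i <= n;
            loop invariant \forall integer k; 0 <= k < i ==> m >= a[k];
            loop assigns i, m;
            loop variant n - i;
        */
        while (i < n) {
            if (a[i] > m) {
                m = a[i];
            }
            i++;
        }
        return m;
    }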
All the other kinds of complaints you might get from a static analyzer are pretty much heuristic opinions, that is, they are based on some informal interpretation of the code (or specification if it indeed even exists).
The "heavy duty" static analyzers such as Coverity etc. have pretty good program models, but they don't tell you that your code meets a specification (they don't even look to see if you have one). At best they only tell you that your code does something undefined according to the language ("dereference a null pointer") and even that complaint isn't always right.
So-called "style checkers" such as MISRA are also static analyzers, but their complaints are essentially "You used a construct that some committee decided was bad form". That's not actually a bug, it is pure opinion.
You can certainly classify static analysis as a kind of formal verification.
how can these be formal verification if there is no formal description of the model?
For static analysis tools, the model is implicit (or in some tools, partly implicit). For example, "a well-formed C++ program will not leak memory, and will not access memory that hasn't been initialized". These sorts of rules can be derived from the language specification, or from the coding standards of a particular project.
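A minimal sketch of two findings that follow from that implicit model (the answer's example is C++; the same implicit rules exist for C, used here for consistency with the other sketches; note that no user-written specification appears anywhere):

    #include <stdlib.h>

    int implicit_model_demo(void) {
        int uninitialised;                    /* never assigned */
        int *buf = malloc(10 * sizeof *buf);
        if (buf == NULL) {
            return -1;
        }
        buf[0] = uninitialised;   /* finding 1: read of an uninitialised value,
                                     forbidden by the language's implicit model */
        return buf[0];            /* finding 2: buf is never freed, so the
                                     function leaks memory on every call */
    }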

General: how to validate a proposed system architecture?

Good day,
I'm taking over from a colleague who started defining the architecture of a HW/SW communication tool. I'm now in charge of validating his progress so far, before writing all the specifications and implementing it.
How can I formally proceed? Is there a methodology to validate a HW/SW architecture against initial top-level requirements?
Cheers,
Funky24
In general, your architecture is "valid" if it meets the set of top-level requirements that are architecturally significant to your stakeholders.
By that I mean that:
1. It's better not to assume that your list of requirements is complete or that it has been written correctly.
2. You need to make a selection of the most important requirements and focus on those.
3. The list from point 2 will generally include a majority of non-functional requirements (performance, usability, etc.). Functional requirements are of course important, but they don't drive the architecture of a system.
A good formal methodology for architecture evaluation is ATAM (the Architecture Tradeoff Analysis Method) from the SEI. You could have a look at their technical report and tailor it to your needs and to the size of the project. Since you did not produce the design yourself, you may end up doing some reverse engineering to figure out why certain choices were made.

Missing requirements in Joel's Functional Spec

I assume most people have read the Painless Functional Specifications articles by Joel Spolsky. In part two, What's a Spec?, a sample spec is provided. However, there is no mention of requirements. I have two questions:
How do requirements fit into the sample functional spec? I assume the requirements must be known before a functional spec can be written. So they can't be part of the functional spec, but where are they recorded?
How does test-driven development (TDD) fit into the functional spec / technical spec split Joel outlines (below):
A functional specification describes how a product will work entirely
from the user's perspective. It doesn't care how the thing is
implemented. It talks about features. It specifies screens, menus,
dialogs, and so on.
A technical specification describes the internal implementation of the
program. It talks about data structures, relational database models,
choice of programming languages and tools, algorithms, etc.
Functional design
This is the WHAT.
What are you designing? What will users do with it? What value will it provide them?
The functional spec is the requirements. Each operation the various users perform (create account, log in, view time) is a requirement of the system.
You have to go deeper, though, and ask yourself, "what happens if Mike can't remember his password?" "What does 'exciting' mean to Cindy?" etc. (This is why Joel notes it isn't a complete spec—it is missing many details.)
TDD
Test-driven development is the HOW.
How do the classes, methods, etc. work? How are errors handled? How does data flow through the code?
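A minimal sketch of that WHAT/HOW split in TDD style (parse_hour and its contract are hypothetical, not from Joel's spec): the test encodes a functional requirement as executable examples and is written first; the implementation is then written to make it pass:

    #include <assert.h>
    #include <stdio.h>

    /* HOW: the implementation, driven by (and written after) the test below. */
    static int parse_hour(const char *s) {
        int h = 0;
        if (s == NULL || s[0] < '0' || s[0] > '9') {
            return -1;                  /* error handling: not a number */
        }
        for (; *s >= '0' && *s <= '9'; s++) {
            h = h * 10 + (*s - '0');
            if (h > 23) {
                return -1;              /* out of range (also avoids overflow) */
            }
        }
        return (*s == '\0') ? h : -1;   /* reject trailing junk */
    }

    /* WHAT: the functional requirement ("view time" accepts hours 0-23 and
     * rejects everything else), stated as tests before any code existed. */
    static void test_parse_hour(void) {
        assert(parse_hour("0") == 0);
        assert(parse_hour("23") == 23);
        assert(parse_hour("24") == -1);   /* out of range */
        assert(parse_hour("2x") == -1);   /* trailing junk */
        assert(parse_hour(NULL) == -1);   /* error handling */
    }

    int main(void) {
        test_parse_hour();
        puts("all tests passed");
        return 0;
    }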

SonarQube Category Explanations

Can anybody suggest a one- or two-line explanation of the five SonarQube categories, such that a non-developer can understand what the percentage figure means?
Efficiency
Maintainability
Portability
Reliability
Usability
One word "synonym" for non-developers (not exact synonym though, but enough to give a quick idea):
Efficiency : performance
Maintainability : evolution
Portability : reuse
Reliability : resilience
Usability : design
Most of those metrics are presented in this Wikipedia entry
Efficiency:
Efficiency IT metrics measure the performance of an IT system.
An effective IT metrics program should measure many aspects of performance including throughput, speed, and availability of the system.
Maintainability:
the ease with which a product can be maintained in order to:
correct defects
meet new requirements
make future maintenance easier, or
cope with a changed environment
Portability:
the ability to reuse existing code instead of creating new code when moving software from one environment to another.
Reliability:
The IEEE defines reliability as "The ability of a system or component to perform its required functions under stated conditions for a specified period of time."
Note from this paper:
To most project and software development managers, reliability is equated to correctness, that is, they look to testing and the number of "bugs" found and fixed.
While finding and fixing bugs discovered in testing is necessary to assure reliability, a better way is to develop a robust, high quality product through all of the stages of the software lifecycle.
That is, the reliability of the delivered code is related to the quality of all of the processes and products of software development; the requirements documentation, the code, test plans, and testing.
Usability:
studies the elegance and clarity with which the interaction with a computer program or a web site (web usability) is designed.
Usability differs from user satisfaction insofar as the former also embraces usefulness (see Computer user satisfaction).
See for instance usabilitymetrics.com
For each category, the percentage represents the density of violations (non-compliance) of the corresponding rules in the source code.
