How to understand the meaning of high-level requirements and low-level requirements in DO-178C?

DO-178C (like DO-178B before it) requires two levels of software requirements: high-level requirements and low-level requirements.
But except for very small programs, the hierarchy of most embedded software is: (the whole) embedded software -> component -> unit.
[Figure: hierarchy structure of embedded software]
If we develop software requirements for each level, they will look like this:
(1) embedded-software-level requirement: normally a requirement at this level reads "The embedded software shall xxxx"
(2) component-level requirement: normally "The component A shall xxxx"
(3) unit-level requirement: normally "The unit A.1 shall xxxx"
An embedded-software-level requirement is a typical functional requirement that defines "what", while component-level and unit-level requirements are "design requirements" that define "how".
Do the high-level requirements of DO-178C cover both the embedded-software-level requirements and the component-level requirements, i.e. does DO-178C merge these two levels of requirements into one?
If they are merged into one level, I see a problem: a high-level requirement may not be testable in hardware/software integration testing, because it actually defines the requirement for one component, and the internal information of that component cannot be observed during hardware/software integration testing.
If there are three levels of software requirements, the traceability between requirements and tests is easy for me to understand.
[Figure: traceability between three-level requirements and tests]
But if there are only two levels of requirements, does that mean the traceability will look like the following?
[Figure: traceability between two-level requirements and tests]
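(Purely as an illustration, not part of the original question: one way to picture the two traceability structures is a small Python sketch mapping each requirement to the test that verifies it and the requirements it refines into. All requirement and test identifiers below are hypothetical.)

    # Three-level structure: software reqs -> component reqs -> unit reqs,
    # each level verified by its own kind of test.
    three_level = {
        "SW-REQ-1": {"verified_by": "HW/SW integration test T-1",
                     "refines_into": ["COMP-A-REQ-1"]},
        "COMP-A-REQ-1": {"verified_by": "SW integration test T-2",
                         "refines_into": ["UNIT-A1-REQ-1"]},
        "UNIT-A1-REQ-1": {"verified_by": "unit test T-3",
                          "refines_into": []},
    }

    # Two-level DO-178C structure: high-level reqs -> low-level reqs. A
    # component-level requirement has to live in one of these two sets and be
    # verified by requirements-based testing at a level where it is observable.
    two_level = {
        "HLR-1": {"verified_by": "HW/SW integration test T-1",
                  "refines_into": ["LLR-1"]},
        "LLR-1": {"verified_by": "low-level (unit-style) test T-2",
                  "refines_into": []},
    }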


What is the difference between Requirements and FUR (Functional User Requirements) in COSMIC?

Are the concept of Functional User Requirements (as defined below) and the concept of Functional Requirements essentially the same, or not?
Definition of Requirement, from Object-Oriented Software Engineering Using UML, Patterns, and Java, Bruegge et al. (3rd edition):
Requirements specify a set of features that the system must have. A functional requirement is a specification of a function that the system must support, whereas a nonfunctional requirement is a constraint on the operation of the system that is not related directly to a function of the system.
Definition of FUR (Functional User Requirements), from the COSMIC Measurement Manual for ISO 19761 (2021):
Functional User Requirements: sub-set of the user requirements describing what the software shall do, in terms of tasks and services.
This doubt arose because the COSMIC Measurement Manual contains the linked image, in which requirements are the input of the "Measurement Strategy" phase, which outputs FURs, so I think there is a difference that I cannot see...
[Figure: COSMIC measurement process diagram]

Interview assignment - design a system like S3

In my phone interview for a software architect role at a financial firm, I was asked to "design a cloud storage system like AWS S3".
Here is what I answered. Would you please share your critiques and comments on my approach? I would like to improve based on your feedback.
First, I listed the requirements:
- CRUD microservices on objects
- Caching layer to improve performance
- Deployment on PaaS
- Resiliency with failover
- AAA support (authorization, auditing, accounting/billing)
- Administration microservices (user, project, object lifecycle, SLA dashboard)
- Metrics collection (Ops, Dev)
- Security for service endpoints for the admin UI
Second, I defined the basic APIs:
- https://api.service.com/services/get (arguments: object id, metadata; returns: binary object)
- https://api.service.com/services/upload (arguments: object; returns: object id)
- https://api.service.com/services/delete (arguments: object id; returns: success/error)
- http://api.service.com/service/update-meta (arguments: object id, metadata; returns: success/error)
Third, I drew the architecture on the board, including some COTS components I could use; below is the picture.
The interviewer did not ask me many questions, so I am a bit worried about whether I am on the right track with my process. Please provide your feedback.
Thanks in advance.
There are a couple of areas of feedback that might be helpful:
1. Comparison with S3's API
The S3 API is a RESTful API these days (it used to support SOAP) and it represents each 'file' (really a blob of data indexed by a key) as an HTTP resource, where the key is the path in the resource's URI. Your API is more RPC, in that each HTTP resource represents an operation to be carried out and the key to the blob is one of the parameters.
Whether this is a good or a bad thing depends on what you're trying to achieve and what architectural style you want to adopt (although I am a fan of REST, that doesn't mean you have to adopt it for all applications). However, since you were asked to design a system like S3, your answer would have benefited from a clear argument as to why you chose NOT to use REST as S3 does.
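(To make the contrast concrete, here is a small, purely illustrative Flask sketch that was not part of the original answer; the route names, handlers, and in-memory store are all hypothetical.)

    from flask import Flask, request

    app = Flask(__name__)
    store = {}  # in-memory stand-in for the blob store

    # RPC style, similar to the API in the question: the operation is in the
    # path and the object key is a parameter.
    @app.route("/services/get")
    def rpc_get():
        return store.get(request.args["object_id"], ("not found", 404))

    # REST style, similar to S3: the key is the resource path and the HTTP
    # verb carries the operation.
    @app.route("/objects/<key>", methods=["GET", "PUT", "DELETE"])
    def rest_object(key):
        if request.method == "PUT":
            store[key] = request.get_data()
            return "", 201
        if request.method == "DELETE":
            store.pop(key, None)
            return "", 204
        return store.get(key, ("not found", 404))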
2. Lines connecting things
Architecture diagrams often tend to be very high level - which is appropriate - but there is sometimes a tendency to just draw lines between boxes without being clear about what those lines mean. Does a line mean there is a network connection between the infrastructure hosting those software components? Does it mean there is an information or data flow between those components?
When you draw a line, as in your diagram, with multiple boxes all joining it, the implication is that there is some relationship between the boxes. When you add arrows, there is the further implication that the relationship follows the direction of the arrows. But there is no clarity about what that relationship is, or why the directionality is important.
One could infer from your diagram that the Memcache Cluster and the File Storage cluster are both sending data to the Metrics/SLA portal, but that they are not sending data to each other. Or that the ELB is not connected to the microservices. Clearly that is not the case.
3. Mixing Physical, Logical, Network & Software Architecture
General Types of Architecture
Logical Architecture - tends to be more focussed on information flows between areas of functional responsibility
Physical Architecture - tends to be more focussed on deployable components, such as servers, VMs, containers, but I also group installable software packages here, as a running executable process may host multiple elements from the logical architecture
Specific Types of Architecture
Network Architecture - focuses on network connectivity between machines and devices - may reference VLANs, IP ranges, switches, routers etc.
Software Architecture - focuses on the internal structures of a software program design - may talk about classes, modules, packages etc.
Your diagram includes a load balancer (more physical) and also a separate box per microservice (which could be physical, logical, or software), where each microservice is responsible for a different type of operation. It is not clear whether each microservice has its own load balancer, or whether the load balancer is a layer-7 balancer that can map paths to different front ends.
4. Missing Context
While architectures often focus on the internal structure of a system, it is also important to consider the system context - i.e. what are the important elements outside the system that the system needs to interact with? e.g. what are the expected clients and their methods of connectivity?
5. Actual Architectural Design
While the feedback above focussed on how you communicated your design, this point is about the actual design itself.
COTS products - did you talk about alternatives and why you selected the one you chose? Or is it just the only one you know? Awareness of the options and the ability to select the appropriate one for a given purpose is valuable.
Caching - you have caching in front of the file storage, but nothing in front of the microservices (edge cache, or front-end reverse proxy) - assuming the microservices are adding some value to the process, caching their results might also be useful (see the sketch after this list).
Redundancy and durability of data - while you talk about resiliency to failover, data redundancy and durability of the data storage is a key requirement in something like this and some explicit reference to how that would be achieved would be useful. Note this is slightly different to availability of services.
Performance - you talk about introducing a caching layer to improve performance, but don't quantify the actual performance requirements - hundreds of objects stored or retrieved per second, thousands, or millions? You need to know that in order to decide what to build in.
Global Access - S3 is a multi-region/multi-datacentre solution - your architecture does not reference any aspect of multi-datacentre such as replication of the stored objects and metadata
Security - you reference requirements around AAA but your proposed solution doesn't define which component is responsible for security, and at which layer or at what point in the request path a request is verified and accepted or rejected
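(A purely illustrative sketch of the result-caching idea mentioned under "Caching" above, assuming a single-process service where a small in-memory cache with a time-to-live is acceptable; the function and parameter names are made up.)

    import functools
    import time

    def ttl_cache(ttl_seconds=30):
        """Cache a function's results for a limited time (single-process only)."""
        def decorator(fn):
            cache = {}  # maps the argument tuple -> (expiry time, cached value)

            @functools.wraps(fn)
            def wrapper(*args):
                now = time.monotonic()
                hit = cache.get(args)
                if hit is not None and hit[0] > now:
                    return hit[1]                      # still fresh: serve from cache
                value = fn(*args)                      # otherwise call the real service
                cache[args] = (now + ttl_seconds, value)
                return value
            return wrapper
        return decorator

    @ttl_cache(ttl_seconds=10)
    def get_object_metadata(object_id):
        # Hypothetical expensive call to the metadata microservice.
        return {"object_id": object_id, "size_bytes": 0}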
6. The Good
Lest this critique be thought too negative, it's worth saying that there is a lot to like in your approach - your assessment of the likely requirements is thorough, and it is great to see security, operational monitoring, and SLAs considered up front.
However, reviewing this, I'd wonder what kind of job it actually was - it looks more like an application for a cloud architect role than a software architect role, for which I'd expect to see more discussion of packages, modules, assemblies, libraries, and software components.
All of the above notwithstanding, it's also worth considering - what is an interviewer looking for if they ask this in an interview? Nobody expects you to propose an architecture in 15 minutes that can do what has taken a team of Amazon engineers and architects many years to build and refine! They are looking for clarity of thought and expression, thoroughness of examination, logical conclusions from clearly stated assumptions, and knowledge and awareness of industry standards and practices.
Hope this is helpful, and best of luck on the job hunt!

Where to start with tech specs for an IT project, more specifically functional specifications

Hi, I'm wondering if someone knows some good resources on writing project specs.
I'm a freelance developer, more focused on actual development and less on tech specs.
I'm involved in a project where I have to write technical specs (and functional requirement specs) and have absolutely no idea where to start. Any good site, sample, or book you would advise?
I strongly recommend User Stories Applied by Mike Cohn.
Any non-trivial project requires a tech spec. By non-trivial I mean more than about one week of coding or more than one programmer.
There are not many resources online that explain how to write a good tech spec, so let me share my view of the field.
Every spec should contain:
Title
Overview (in general terms, what the project is about)
Operational purpose (what the project is for; the goal)
Functional purpose (the ways and technical methods/resources applied to fulfil the operational purpose)
Definitions (to avoid polysemy and for clarification purposes)
DATA AND LISTS (the most important and biggest part of the tech spec: the section that describes data structures, relational database models, the choice of programming languages and tools, algorithms, etc.)
Wireframes and pages descriptions
Technical requirements (hosting conditions, system requirements, etc)
Commissioning and acceptance conditions (all the criteria that make your job complete)

Rules for SIL allocation for tasks in Safety-critical applications and partition sharing

Considering a safety-critical application, composed of several tasks, I have the following question:
Is it possible to have tasks of different SILs in an application, or must all tasks be the same SIL? I know that in hardware it is possible to have a system of a certain SIL that is actually composed of subcomponents of different SILs; IEC 61508-2, sec. 7.4.3 presents the rules for combining subsystems of different SILs to form a system of a higher SIL than its constituent parts.
If it is possible, what are the rules for combining them? References are very helpful.
For example, can a task of SIL 2 be the input for a task of SIL 3?
Thanks and good luck,
Yes, it is possible. I recommend reading Part 3 of the latest version of IEC 61508 (IEC 61508-3:2010), Appendix F, "Techniques for achieving non-interference between software elements on a single computer"; it's only five pages, but very informative. It outlines methods for achieving spatial and temporal independence of software modules with differing SIL levels.
As mentioned earlier in this thread, operating systems such as PikeOS and VxWorks should provide this partitioning; I know that SafeRTOS, which has been certified to IEC 61508, provides this type of partitioning as standard.
You should look at systems based on ARINC 653 (and DO-297) or equivalent. Partition-based OSes are designed to address this kind of need, e.g. PikeOS, VxWorks, INTEGRITY...
As I said, an ARINC 653-compliant RTOS (for aircraft) targets exactly this goal. DO-178B (the aerospace equivalent of IEC 61508, ISO 26262, or Def Stan 55/56) requires segregation in space and time between partitions of different software assurance levels (for you, SIL levels). You may find equivalent systems for your specific market.
For linking different levels, there are inherent difficulties in the low-level layers and the communication channel. You will have to prove the determinism of your system at the highest level of security/safety/reliability involved (i.e. the most difficult to achieve). Thus, communication cannot be blocking, the RTOS has to be certified to the higher level, and so on. This is taken into account in partition-based RTOSes, such as ARINC 653 equivalents.
You may also have success with MILS Linux or virtualised systems (i.e. hypervisors such as XEN or OKL kernels).
You can combine SW modules with different SIL levels, although the Independent Safety Assessor will analyse your code deeply. The principle is simple: you have to demonstrate that a lower-SIL module cannot influence a higher-SIL module. To achieve this, keep in mind that a lower-SIL function may call a higher-SIL function, but the opposite must be strictly avoided.
In this scenario, to exchange data between two modules with different SIL levels, you need a third module, with a SIL level equal to the higher of the two, that provides both of them with the API to exchange data.
Example:
- a SIL3 task (T1) implements a fail-safe application protocol.
- a SIL0 task (T2) implements the TCP/IP stack, used as transport layer of the application protocol.
Of course, T1 and T2 have to exchange data in both directions.
You need a third task (T3), at least SIL3, that provides the inter-task communication API (e.g. some queue management functions). In this way both T1 and T2 call only the functions of T3 (which is SIL3) to exchange data.
A typical example of this kind of mechanism is the so-called "blackboard", used in avionics applications.
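(To make the T3 idea concrete, here is a minimal, purely illustrative Python sketch of the communication module; a real system would use a certified RTOS API rather than Python, and all names here are hypothetical.)

    import queue

    class HighSilComms:
        """Stand-in for T3: the higher-SIL module that owns the channel.
        T1 and T2 never touch each other's data directly; they only call
        these functions."""

        def __init__(self, depth=16):
            # Bounded, non-blocking queues so a misbehaving lower-SIL task
            # cannot stall a higher-SIL task.
            self._to_t1 = queue.Queue(maxsize=depth)
            self._to_t2 = queue.Queue(maxsize=depth)

        def send_to_t1(self, msg):
            try:
                self._to_t1.put_nowait(msg)
                return True
            except queue.Full:
                return False      # caller decides how to handle the overflow

        def receive_for_t1(self):
            try:
                return self._to_t1.get_nowait()
            except queue.Empty:
                return None       # nothing pending

        def send_to_t2(self, msg):
            try:
                self._to_t2.put_nowait(msg)
                return True
            except queue.Full:
                return False

        def receive_for_t2(self):
            try:
                return self._to_t2.get_nowait()
            except queue.Empty:
                return None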

How to decompose a system into modules?

The effectiveness of a "modularization" is dependent upon the criteria used in dividing the system into modules.
What I want are suggestions for criteria that can be used to decompose a system into modules.
Cohesion: the functionality in a module is related.
Low coupling: you have minimum dependencies between modules.
Coordinated lifecycle: changes to functionality within a module tend to occur at the same time. This is usually a consequence of high cohesion.
I think the Single Responsibility Principle would be a good guide. Try to define responsibilities for each module, and make each module responsible for its own thing.
See http://butunclebob.com/ArticleS.UncleBob.PrinciplesOfOod
Interesting reading too: http://www.cs.umd.edu/class/spring2003/cmsc838p/Design/criteria.pdf
This is a very old question.
A module is a work assignment to a programmer or group of programmers. It is also the unit of change.
Coupling and cohesion are metrics for estimating the quality of the relationships between modules, but they are not useful for performing the decomposition itself.
The decomposition should be made using "information hiding" as the criterion.
An introduction with examples, and a description of a process based on the "information hiding" principle: http://www.sqrl.ul.ie/Downloads/Lecture2.pdf
The state of the art on this question is the software product line topic.
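(As a small illustration of the "information hiding" criterion, not taken from the answers above: hide a design decision, here the storage representation, behind a narrow interface so that changing it does not ripple into client modules. All names are hypothetical.)

    class SymbolTable:
        """The hidden design decision is how symbols are stored. Clients see
        only add/lookup, so the representation can change (dict, sorted list,
        database, ...) without affecting them."""

        def __init__(self):
            self._entries = {}   # private: the current representation choice

        def add(self, name, value):
            self._entries[name] = value

        def lookup(self, name, default=None):
            return self._entries.get(name, default)

    # A client module depends only on the interface, never on the representation.
    table = SymbolTable()
    table.add("max_retries", 3)
    print(table.lookup("max_retries"))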

Resources