I am writing documentation for my software engineering subject. My project is a Hospital Management System. Here is the question that is confusing me:
(2. Architectural design) Present the overall software architecture, stating whether it's Layered, Repository, Client-Server, or Pipe and Filter architecture (skim through pages 155 to 164 of our reference textbook to see descriptions of these different architectures). Describe and present it on a standard or non-standard diagram.
So what is the difference between a standard and a non-standard diagram?
The question is indeed confusing, since it presents architectural styles as if they were mutually exclusive (a system can, for example, be layered and client-server at the same time) and relies on ambiguous terminology.
When it comes to architectural diagrams, there are standard diagrams, which follow a well-known formal graphical notation. Typical examples are:
UML
Older OO notations (e.g. Booch, Rumbaugh's OMT, or Objectory - really old, since these were merged together to create UML).
Non-OO notations, for example the IDEF suite (which has since been enriched with an OO layer), SADT, and Gane & Sarson (also quite old and less and less used, except in some niche markets).
Among these, the only one that officially and unambiguously qualifies as a standard is UML: it is the only one recognized by an international standard-setting body (ISO/IEC 19505).
But in architecture you also find a fair number of non-standard diagrams that convey the structural intent graphically. Typically, a layered arrangement of services, or a hexagonal or concentric presentation, is used. Sometimes it is even more pictorial, with clients shown as PCs and several servers in the network. All of these use non-standard notations.
I have always wondered how a project/team/company selects or qualifies a specific guideline to follow, like MISRA 1998/2004/2012. How should one decide which guideline will be sufficient (cost vs. time vs. quality) for qualifying a project?
(I know this question is a bit broad, but any answers will be appreciated.)
This is often a requirement from the area of application. You'll have safety standards dictating that a safe subset of the programming language is used. This requirement can come from the "SIL" standards such as the generic/industrial IEC 61508, automotive ISO 26262, aerospace DO-178, and so on. In such mission-critical systems, you may not have any choice other than to use MISRA-C.
But MISRA-C is also becoming an industry standard for all embedded systems, no matter their nature. The reason is obvious: nobody, no matter the area of application, likes bad, low-quality software full of bugs.
Introducing MISRA-C, together with company coding guidelines, style rules, static analysis, version control... all of it will improve the quality of the end product significantly. It will also force the programmers to educate themselves about C and become more professional overall.
That being said, MISRA-C is not necessarily the most suitable set of rules for every company. It is mostly applicable to embedded systems. For other kinds of applications, something like CERT-C might be more relevant. It is convenient to use one of these well-known standards, because then you can automate tests with static analysers.
The key here is that every semi-professional company that produces software needs some sort of coding rules that focus on banning bad practice. Some companies tend to focus way too much on mostly unimportant style details like where to place {, when they should focus on things that truly improve software quality, like "we should avoid writing function-like macros".
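To make that contrast concrete, here is a classic example (my own, not taken from MISRA) of why banning function-like macros is about substance rather than style:

    /* A classic macro bug: the expansion ignores operator precedence. */
    #define SQUARE(x) x * x

    /* A static inline function avoids the macro pitfalls: the argument
       is evaluated exactly once and precedence works as expected. */
    static inline int square(int x) { return x * x; }

    int demo(void)
    {
        int bad  = SQUARE(1 + 2);  /* expands to 1 + 2 * 1 + 2 == 5, not 9 */
        int good = square(1 + 2);  /* 9 */
        return bad + good;
    }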
The coding rules should be integrated with development routines. It doesn't make much sense to implement MISRA-C for one single project. There is quite a lot of work involved in getting it up and running.
What is very important is that you have at least one, preferably several, C veterans with many years of experience, whom you put in charge of the coding rules. They need to read every dirty detail of the MISRA document and decide which rules make sense for the company to implement. Some of the rules are far from trivial to understand. Not all of them make sense in every situation. If your dev team consists solely of beginners or intermediately experienced programmers and you decide to follow MISRA to the letter, it will end badly. You need at least one veteran programmer with enough knowledge to make the calls about which rules to follow and which to deviate from.
As for which version of MISRA-C to pick, always use the latest one: 2012. It has been cleaned up quite a bit and some weird rules have been removed. It also supports C99, unlike the older ones.
Code shall be written such that it has certain desired properties (quality criteria). Some quality criteria are practically always desired, like correctness, readability, and maintainability (even here there are exceptions, for example if you code for the Obfuscated C Contest or the Underhanded C Contest). Other quality criteria are only relevant for certain projects or organizations, for example portability to 16-bit machines.
A well-formulated coding rule typically supports one or several quality criteria. It can, however, be in conflict with others. There is a large number of established coding rules that support the typically desired quality criteria without significant negative impact on the others. Many of these were identified quite a long time ago (Kernighan and Plauger: The Elements of Programming Style, 1974 - and, yes, I have a copy of it :-). From time to time, additional good rules are identified.
Except in very rare circumstances (like intentional obfuscation), code should follow these "established good rules", irrespective of whether they are part of MISRA-xxxx or not. And, if a new good rule is found, ensure people start following it. You may even decide to apply this new good rule to already existing code, but that is more of a management decision because of the risk involved.
It simply does not make sense to ignore a good rule just because it is not part of MISRA-xxxx. Similarly, it does not make sense to follow a bad rule just because it is part of MISRA-xxxx. (MISRA-C:2004 states that 16 rules were removed from MISRA-C:1998 because "they did not make sense" - hopefully some developers noticed and did not apply them. And, as Lundin mentions, in MISRA-C:2012 some "weird rules" were again removed.)
However, be aware that for almost every rule, its thoughtless application can, in certain situations, even degrade the very quality criteria it normally supports.
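For example (the rule picked here is just my illustration): a strict single-point-of-exit rule normally supports readability, but applied mechanically it can turn a clear early-return search into flag-variable bookkeeping:

    /* Early return: the search logic is visible at a glance. */
    int find_early(const int *a, int n, int key)
    {
        for (int i = 0; i < n; i++) {
            if (a[i] == key) {
                return i;
            }
        }
        return -1;
    }

    /* The same function contorted to satisfy a strict single-exit rule;
       arguably harder to read - the very criterion the rule serves. */
    int find_single_exit(const int *a, int n, int key)
    {
        int result = -1;
        int found = 0;
        for (int i = 0; (i < n) && !found; i++) {
            if (a[i] == key) {
                result = i;
                found = 1;
            }
        }
        return result;
    }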
In addition to those generally applicable rules, there are rules that are only relevant if there are specific quality criteria for your code. What complicates things is that, in most projects, the various quality criteria differ in relevance across the different parts of the code.
For an interrupt service routine, performance is more critical than for most other parts of the software. Therefore, compromises with respect to other quality criteria will be made.
Some software parts are designed to be portable between environments, but often by introducing adapters/wrappers which are then specific to a certain environment. Thus, the adapter itself is not even intended to be portable.
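A minimal sketch of such an adapter, with invented names: a portable millisecond-tick interface whose per-environment implementations are deliberately non-portable:

    #include <stdint.h>

    /* Portable interface: the rest of the code depends only on this. */
    uint32_t ticks_ms(void);

    /* Environment-specific adapters: each is deliberately non-portable. */
    #ifdef _WIN32
    #include <windows.h>
    uint32_t ticks_ms(void) { return (uint32_t)GetTickCount(); }
    #else
    #include <time.h>
    uint32_t ticks_ms(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return (uint32_t)(ts.tv_sec * 1000 + ts.tv_nsec / 1000000L);
    }
    #endif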
Except for the above-mentioned "established good rules", a fixed set of coding rules consequently cannot work well for all parts of a typical piece of software. So, for each part of the software, first identify which quality criteria are relevant for that part. Then apply those patterns, principles, and rules which really support those specific quality criteria.
But what about MISRA? The MISRA rules are intended to support quality criteria like correctness via analysability (e.g. the ban on recursion and dynamic memory management) and portability (e.g. isolation of assembly code). That is, for software parts where these specific quality criteria are not relevant, the respective MISRA rules do not bring any benefit. Unfortunately, MISRA has no notion of software architecture, in which different parts have different quality criteria.
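To see how the recursion ban serves analysability, compare these two forms (my example): the iterative one has a compile-time-constant worst-case stack usage, the recursive one does not:

    /* Recursive: worst-case stack depth depends on the run-time value n,
       so a static stack-usage analysis cannot give a small fixed bound. */
    unsigned factorial_recursive(unsigned n)
    {
        return (n <= 1u) ? 1u : n * factorial_recursive(n - 1u);
    }

    /* Iterative: stack usage is a small compile-time constant, which is
       exactly what the analysability criterion is after. */
    unsigned factorial_iterative(unsigned n)
    {
        unsigned result = 1u;
        for (; n > 1u; n--) {
            result *= n;
        }
        return result;
    }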
While many rules have improved over the MISRA releases, there are still many rules that are stricter than necessary (e.g. "no /* within // comments and vice versa" - why?) and rules that try to avoid (unlikely) mistakes in ways that are likely to introduce problems rather than solve them (e.g. "no re-use of any name, not even in different local scopes"). Here's my tip for you: if you a) really want your rule checker to find all bugs and b) don't mind getting an absurd number of warnings with a high false-positive ratio, this one rule/checker does it all for you: "warning: line of code detected: <line of code>".
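To illustrate the name re-use point above, this is the kind of harmless code such a rule flags (my example):

    int sum_and_double(const int *a, int n)
    {
        int sum = 0;
        for (int i = 0; i < n; i++) {
            int tmp = a[i];    /* first use of the name 'tmp' */
            sum += tmp;
        }
        {
            int tmp = sum * 2; /* re-use of 'tmp' in a sibling scope:
                                  unambiguous to compiler and reader alike,
                                  yet forbidden by a blanket no-re-use rule */
            sum = tmp;
        }
        return sum;
    }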
Finally, MISRA is developed under the assumption that C (in particular its arithmetic) is too hard to understand. MISRA develops its own arithmetic model on top of C, in the belief that developers should instead learn the MISRA arithmetic and can then get away without understanding C's arithmetic. Apparently, this has not been successful, because the MISRA-C:2004 model ("underlying type") was replaced by another one ("essential type") in MISRA-C:2012. Thus, if your team understands C well and uses a good static analysis tool (or even several) capable of identifying the problematic scenarios in C's arithmetic, the MISRA approach is likely not for you.
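The kind of C arithmetic subtlety both approaches are trying to tame looks like this (a standard textbook case, not taken from MISRA):

    #include <stdio.h>

    int main(void)
    {
        unsigned int u = 1u;
        int s = -2;

        /* Usual arithmetic conversions: s is converted to unsigned int,
           so u + s wraps to UINT_MAX instead of evaluating to -1. */
        if (u + s > 0u) {
            puts("taken: u + s is a huge unsigned value, not -1");
        }
        return 0;
    }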
TL;DR: Follow standard best practices, apply established good rules. Identify specific quality criteria for the different parts of your architecture. For each part, apply patterns, principles and rules appropriate for that part. Understand rules before applying them. Don't apply stupid rules. Use good static code analysis. Be sure your team understands C.
Just as with your compiler suite or development environment, and/or your static analysis tools, the appropriate version should be decided at the onset of a project, as it may be difficult to change later!
If you are starting a project from scratch, then I would recommend adopting MISRA C:2012 - while also adopting the guidance of MISRA Compliance (published April 2016, revised February 2020).
The new Compliance guidance also offers advice on how to treat Adopted Code (i.e. pre-existing or imported code).
Existing projects that have been developed to an earlier version should continue to use that earlier version... unless there is a good business case for changing. Changing WILL result in some rework!
I've found the definition of "model" in the UML Reference Manual (chapter 2), and I can't understand what the authors mean by the following sentence:
The semantic modeling elements are used for code generation, validity checking, complexity metrics.
How can the semantic aspect of a UML model be used in
1) code generation,
2) validity checking and
3) complexity metrics
I hope someone can help me understand this through a simple example.
If you are referring to the book "The Unified Modeling Language Reference Manual" by James Rumbaugh, Ivar Jacobson, and Grady Booch (Copyright © 1999 by Addison Wesley Longman, Inc.), then the part you are quoting from chapter 2 starts with:
What Is in a Model?
Semantics and presentation. Models have two major aspects: semantic information (semantics) and visual presentation (notation).
The semantic aspect captures the meaning of an application as a network of logical constructs, such as classes, associations, states, use cases, and messages. Semantic model elements carry the meaning of the model - that is, they convey the semantics. The semantic modeling elements are used for code generation, validity checking, complexity metrics, and so on. The visual appearance is irrelevant to most tools that process models...
So there is an invisible semantic model, a database of things, usually represented as a file in the XML Metadata Interchange (XMI) format. This database is what is used for code generation and everything else. Many modeling tools support import/export of models in this format. At the time the book was written, XMI did not even support importing/exporting diagrams (pictures, the visual presentation), only the semantic part of the models.
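As a rough illustration of generating from semantics only (the class and attribute names are invented, and real tools more often emit Java or C++): given a semantic model containing a class Patient with an attribute name, a generator could emit something like the following, regardless of how - or whether - Patient appears on any diagram:

    /* Derived purely from semantic model elements:
       class Patient, attribute name : String, operation getName(). */
    typedef struct Patient {
        char name[64];
    } Patient;

    const char *Patient_getName(const Patient *self)
    {
        return self->name;
    }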
On the other hand, there are the "only pictures" shown as UML diagrams, which display various aspects of the model. But they are just pictures to be read by humans, with no value for machines.
See e.g. https://stackoverflow.com/a/23308423/2626313 for an example. The lower-right corner shows what is in the model; the other things are "only pictures".
EDIT: for hands-on experience, evaluate some tool that can do code generation and take a look at what it produces.
A professional tool that can do/explain a lot is Sparx Systems Enterprise Architect, which has a community of users here on Stack Overflow.
An easy way to get started, using a more lightweight tool:
open the link in Stack Overflow answer https://stackoverflow.com/a/22694685/2626313 leading to the GenMyModel website
in the GenMyModel application click "Tools → Direct Generation → Java". Download and explore the generated code
For many programming languages there are style guides available,
e.g. PEP8 for Python, this Matlab style guide or the style guides by Google.
For Modelica I found the conventions described in the User's Guide,
but is there something more comprehensive available?
And, ideally, a tool that helps with the re-formatting, indentation etc.?
The guidelines in the Modelica User's Guide are the only ones I am aware of. The topic has been discussed several times at the design meetings, and I've written one paper that touched on the topic but didn't really propose concrete guidelines.
Part of the issue is that while the Modelica Association might have its guidelines (as you've seen), they don't represent any particular business's or industry's guidelines, which might be different. In other words, I could envision many different guidelines floating around that are tailored to specific types of models or specific industry conventions. But the Modelica ones are the only ones I am specifically aware of (although it would not surprise me if large organizations using Modelica had their own formal style guidelines).
For programming FPGAs, is it possible to write my own place & route routines? [The point is not that mine would be better; the point is whether I have the freedom to do so] - or does the place & route stage output into undocumented bitfiles, essentially forcing me to use proprietary tools?
Thanks!
There's been some discussion of this on comp.arch.fpga in the past. The conclusion is generally that unless you want to attract intense legal action from the FPGA companies, you probably don't want to do something like this. Bitfile formats are closely guarded secrets of the FPGA companies, and you would likely have to understand the file format in order to do what you want to do. That implies that you would need to reverse engineer the format, and that (if you made your tool public in any way) would get you a lawsuit in short order.
I will add that there probably are intermediate files and that you likely wouldn't read or write the bitfile itself to do what you want to do, but those intermediate files tend to be undocumented as well. Read the EULA for your FPGA synthesis tool (ISE from Xilinx, for example) - any kind of reverse engineering is strictly forbidden. It seems that the only way we'll ever have open source alternatives in this space is for an open source FPGA architecture to emerge.
I agree with annccodeal, but to amplify a little bit, on Xilinx, there may be a few ways to do this. The XDL file format allows (or used to allow) explicit placement and routing. In addition, it should be possible to script the FPGA Editor to implement custom routing.
As regards placement, there is a rich infrastructure for constraining the technology mapping of logic to primitives and for controlling the placement of those primitives. For example, LUT_MAP constraints can control technology mapping, and LOC and RLOC constraints can determine placement. In practice, these give the experienced designer great control over how a design is implemented, without requiring them to duplicate man-centuries of software development to generate a bitstream directly.
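For illustration, such placement constraints might look like this in a Xilinx UCF file (the instance names and site coordinates here are invented):

    # Absolute placement: pin a register to a specific slice.
    INST "u_ctrl/state_reg" LOC = "SLICE_X10Y20";

    # Relative placement within a macro: keep two accumulator bits adjacent.
    INST "u_dp/acc_0" RLOC = "X0Y0";
    INST "u_dp/acc_1" RLOC = "X0Y1";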
You may also find interesting the current state-of-the-art FPGA CAD research software, such as VPR. In my opinion, these are challenged to keep up with the vendors' own tools, which must cope with modern heterogeneous FPGAs with splittable 6-LUTs, DSP blocks, etc.
Happy hacking.
I am brainstorming an idea for developing high-level software to manipulate matrix algebra equations (tensor manipulations, to be exact) in order to produce optimized C++ code, using several criteria such as the sizes of dimensions, the available memory on the system, etc.
Something similar in spirit to the Tensor Contraction Engine (TCE), but specifically oriented towards producing optimized rather than general code.
The desired end result is software that is expert at producing parallel programs in my domain.
Does this sort of development fall on the category of expert systems?
What other projects out there work in the same area of producing code given the constraints?
What you are describing is more like a Domain-Specific Language.
http://en.wikipedia.org/wiki/Domain-specific_language
It wouldn't be called an expert system, at least not in the traditional sense of this concept.
Expert systems are rule-based inference engines, whereby the expertise in question is clearly encapsulated in the rules. The system you suggest, while possibly encapsulating insight about the nature of the problem domain inside a linear algebra model of sorts, would act more as a black box than an expert system. One of the characteristics of expert systems is that they can produce an "explanation" of their reasoning, and such a feature is possible in part because the knowledge representation, while formalized, remains close to simple statements in natural language; matrices and operations on them, while possibly derived from similar observations of reality, are a lot less transparent...
It is unclear from the description in the question whether the system you propose would optimize existing code (possibly in a limited domain), or whether it would produce optimized code driven by some external goal/function...
Well, production systems (rule systems) are one of four general approaches to computation (Turing machines, Church's recursive functions, Post production systems, and Markov algorithms [and several more have been added to that list]), which more or less have these respective realizations: imperative programming, functional programming, and rule-based programming - as far as I know, Markov algorithms don't have an independent implementation. These are all Turing equivalent.
So rule-based programming can be used to write anything at all. Also, early mathematical/symbolic manipulation programs generally used rule-based programming until the problem was sufficiently well understood (whereupon the approach was changed to imperative or constraint programming - see MACSYMA... hmm, MACSYMA was written in Lisp, so perhaps I have a different program in mind, or perhaps they originally implemented a rule system in Lisp for this).
You could easily write a rule system to perform the matrix manipulations. Depending on the logical support, you could keep a trace to record the actual rules that fired and contributed to a solution (some rules that fire might not contribute directly to a solution, after all). Then, for every rule, you have a mapping to a set of C++ instructions (these don't have to be "complete" - they act more like a semi-executable requirement), which are output as an intermediate language. That is then read by a parser that links it to the required input data and does any kind of fix-up needed. You might find it easier to generate functional code - for one thing, after the fix-up you could more easily optimize the output code in functional source.
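Here is a minimal sketch of that rule-plus-trace idea, assuming a single rewrite rule over a tiny expression tree (all names invented, shown in C for brevity):

    #include <stdio.h>

    /* Hypothetical expression tree: matrix leaves and a unary transpose. */
    typedef enum { MAT, TRANSPOSE } Kind;

    typedef struct Expr {
        Kind kind;
        const char *name;    /* set for MAT leaves */
        struct Expr *child;  /* set for TRANSPOSE nodes */
    } Expr;

    /* One rewrite rule: transpose(transpose(X)) -> X.
       Rules that fire are logged, giving the trace mentioned above. */
    static Expr *rewrite(Expr *e)
    {
        if (e->kind == TRANSPOSE) {
            e->child = rewrite(e->child);
            if (e->child->kind == TRANSPOSE) {
                printf("trace: fired rule (X')' -> X\n");
                return e->child->child;
            }
        }
        return e;
    }

    /* "Code generation": map the simplified tree to output text. */
    static void emit(const Expr *e)
    {
        if (e->kind == MAT) {
            printf("%s", e->name);
        } else {
            printf("transpose(");
            emit(e->child);
            printf(")");
        }
    }

    int main(void)
    {
        Expr a  = { MAT, "A", NULL };
        Expr t1 = { TRANSPOSE, NULL, &a };
        Expr t2 = { TRANSPOSE, NULL, &t1 };
        emit(rewrite(&t2));  /* prints just: A */
        printf("\n");
        return 0;
    }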
Having said that, other contributors have outlined a domain-specific-language approach, and that is what the TCE people did too (my suggestion is just that as well, only using rules).