Formal verification of timing requirements

I am aware of different formal verification tools for verifying properties of programs (the SPIN model checker, for example). Are there any common tools or methodologies for verifying timing requirements in real-time embedded systems? For example: "This algorithm must always terminate within 50 ms." How is this type of verification typically done?

Rate Monotonic Analysis can be useful in determining whether a system is schedulable given the activation latencies and deadlines of all the tasks. There are packages available to do the number crunching for you, but the math involved is not beyond a spreadsheet.
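As an illustration (not part of the analysis above), here is a minimal sketch of the Liu & Layland utilization-bound test that underlies Rate Monotonic Analysis; the task periods and worst-case execution times are made-up numbers:

```python
# Minimal sketch of the Liu & Layland rate-monotonic schedulability test.
# Task set (period_ms, wcet_ms) below is a made-up example.
tasks = [(50, 12), (100, 20), (200, 30)]  # (period, worst-case execution time)

n = len(tasks)
utilization = sum(wcet / period for period, wcet in tasks)
bound = n * (2 ** (1 / n) - 1)  # ~0.78 for n = 3

print(f"U = {utilization:.3f}, bound = {bound:.3f}")
if utilization <= bound:
    print("Schedulable under rate-monotonic priorities (sufficient test passed).")
else:
    print("Inconclusive: run an exact response-time analysis instead.")
```

Note that the bound is only a sufficient condition; a task set that fails it may still be schedulable under an exact response-time analysis.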
Beyond that, the type of timing requirement you mentioned can be difficult to verify. Even if you have the visibility to measure how long the algorithm is active, it is often impossible to test all possible scenarios and so verify that the deadline is never exceeded.
What I have seen done in critical applications like pacemakers and avionics is to design the algorithm so that it cannot exceed the required deadline. This can be done either by limiting the amount of data it can process in one activation, or by having the function time itself and terminate early (returning an error) if it exceeds the deadline. I hope this helps.
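Here is a rough sketch of the "time yourself and terminate early" pattern; the function name, the 50 ms budget, and the per-item work are all hypothetical:

```python
import time

DEADLINE_S = 0.050  # illustrative 50 ms budget

def process_with_budget(items, deadline_s=DEADLINE_S):
    """Process as many items as the time budget allows; report whether we finished."""
    start = time.monotonic()
    done = []
    for item in items:
        if time.monotonic() - start >= deadline_s:
            # Deadline reached: stop early and signal an incomplete run.
            return done, False
        done.append(item * 2)  # stand-in for the real per-item work
    return done, True

results, completed = process_with_budget(range(1_000_000))
print(f"processed {len(results)} items, completed={completed}")
```

The caller then decides what to do with a partial result, which is exactly the "return an error" path described above.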

There are tools such as UPPAAL and the IF toolset for model checking timed systems. They are based on the theory of timed automata.

Related

Why doesn't Fuchsia restrict access to clocks to prevent untrusted processes from performing timing attacks?

A timing attack is when hostile code figures out some information it's not supposed to have by measuring how long it takes other, more trusted processes to perform known actions over private data.
Advocates of the object-capability model generally recognize that you can reduce timing attacks by not providing clocks by default: a process must have been given a clock capability, and is otherwise denied any way of measuring the passage of time. Given that Fuchsia is very much an object-capability style of OS, why are clocks available by default?
This is a fair question, and this is an indirect answer, as it does not explain why clock access is available.
Specifically when it comes to timing attacks, a clock is merely a convenience; it is not a necessity for recording timing information in most scenarios, as other mechanisms, such as counting competing spinning operations or comparing timing against other operations, are often sufficient, albeit sometimes harder to set up.
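To illustrate why a clock is merely a convenience, here is a rough, hypothetical sketch (not Fuchsia-specific) of measuring relative durations with nothing but a competing thread that spins on a counter:

```python
# Sketch: a spinning counter thread acts as a crude, clock-free time source.
# Tick counts are noisy and scheduler-dependent, but relative comparisons work.
import threading

class SpinCounter:
    def __init__(self):
        self.ticks = 0
        self._stop = False
        self._thread = threading.Thread(target=self._spin, daemon=True)
        self._thread.start()

    def _spin(self):
        while not self._stop:
            self.ticks += 1

    def stop(self):
        self._stop = True
        self._thread.join()

counter = SpinCounter()

def measure(fn):
    """Return the number of counter ticks elapsed while fn ran."""
    before = counter.ticks
    fn()
    return counter.ticks - before

fast = measure(lambda: sum(range(10_000)))
slow = measure(lambda: sum(range(1_000_000)))
counter.stop()
print(f"fast ~{fast} ticks, slow ~{slow} ticks")  # relative timing, no clock API used
```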
Other practical issues arise, such as it being desirable, on the path you describe, to also restrict threads, which have some deep ties to this problem space because commonly used standard library mutexes depend on wall time. The implications here become somewhat problematic: attempting to pull all such access out introduces impediments to the startup of very common runtimes, and, depending on implementation details, many designs may also impede the performance of common programs.
--
Edit: I should also add that we have mailing lists, here: fuchsia.dev/fuchsia-src/contribute/community/get-involved This question would be welcome there, and possibly so would a proposal to introduce some capabilities around this. The best place to start the discussion is on those mailing lists. I hope we see you there!

Can a Parallel Processing Efficiency become > 1?

I have read about efficiency in parallel computing but never got a clear idea of it. I have also read about achieving efficiency > 1 and concluded that it is possible when the speedup is super-linear.
Is that correct and possible?
If yes, then can anybody tell me how and provide an example for that?
Or, if it is not, then why?
Let's agree on a few terms first:
A set of processes may get scheduled for execution under several different strategies --
[SERIAL] - i.e. execute one after another has finished, till all are done, or
[PARALLEL] - i.e. all-start at once, all-execute at once, all-terminate at once
or
in a "just"-[CONCURRENT] fashion - i.e. some start at once, as resources permit, others are scheduled for [CONCURRENT] execution whenever free or new resources permit. The processing gets finished progressively, but without any coordination, just as resources-mapping and priorities permit.
Next, let's define a measure for comparing processing efficiency, right?
Given that efficiency may be related to power consumption or to processing time, let's focus on processing time, ok?
Gene Amdahl elaborated the domain of generic processing speedups, from which we will borrow here. A common issue in HPC / computer-science education is that lecturers do not emphasise the real-world costs of organising parallel processing. For this reason, the overhead-naive (original) formulation of Amdahl's Law ought always to be replaced by an overhead-strict re-formulation, because otherwise any naive-form figures in parallel computing are just comparing apples to oranges.
On the other hand, once both the process-setup add-on overhead costs and the process-termination add-on overhead costs are recorded in the scheme, the overhead-strict speedup comparison starts to make sense for speaking about processing-time efficiency.
Having said this, there are cases when processing-time efficiency can become > 1, while it is fair to say that professional due care has to be taken and that not all processing types permit gaining any remarkable speedup on however large a pool of code-execution resources, precisely because of the obligation to pay and cover the add-on costs of the NUMA / distributed-processing overheads.
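As a hedged illustration of an overhead-strict Amdahl-style model, the sketch below adds setup/termination overheads and a made-up cache_gain factor (standing in for per-unit work getting cheaper once each partition fits in cache, which is the usual source of super-linear speedup); all numbers are illustrative:

```python
# Illustrative overhead-strict Amdahl-style model (all parameters made up).
# T1         : serial wall-clock time of the whole job
# p          : parallelisable fraction of that work
# n          : number of processing units
# setup, term: one-off add-on overheads of spawning / joining the parallel part
# cache_gain : per-unit work speedup once each partition fits in cache
#              (> 1 models the super-linear case; 1.0 means no such effect)
def speedup(T1, p, n, setup, term, cache_gain=1.0):
    parallel_part = (p * T1) / (n * cache_gain)
    Tn = (1 - p) * T1 + parallel_part + setup + term
    return T1 / Tn

for gain in (1.0, 2.0):
    S = speedup(T1=100.0, p=0.95, n=8, setup=0.5, term=0.5, cache_gain=gain)
    E = S / 8  # efficiency = speedup per processing unit
    print(f"cache_gain={gain}: speedup={S:.2f}, efficiency={E:.2f}")
```

In this toy model, efficiency crosses 1 only when the per-unit work itself gets cheaper; adding processors alone never achieves it, because the serial fraction and the add-on overheads remain to be paid.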
You may like to read further and experiment with an overhead-strict Amdahl's Law re-formulation in the [PARALLEL]-processing speedups GUI-interactive tool cited here.
Being in Sweden, your must-read is Andreas Olofsson's personal story about his remarkable effort and experience piloting parallel hardware, with many first-evers on the way from Kickstarter to DARPA-acquired [PARALLEL]-hardware know-how.

How do you mitigate proposal-number overflow attacks in Byzantine Paxos?

I've been doing a lot of research into Paxos recently, and one thing I've always wondered about isn't answered anywhere I can find, which means I have to ask.
Paxos includes an increasing proposal number (and possibly also a separate round number, depending on who wrote the paper you're reading). And of course, two would-be leaders can get into duels where each tries to out-increment the other in a vicious cycle. But as I'm working in a Byzantine, P2P environment, it makes me wonder what to do about proposers that would attempt to set the proposal number extremely high - for example, the maximum 32-bit or 64-bit word.
How should a language-agnostic, platform-agnostic Paxos-based protocol deal with integer maximums for proposal number and/or round number? Especially intentional/malicious cases, which make the modular-arithmetic approach of overflowing back to 0 a bit unattractive?
From what I've read, I think this is still an open question that isn't addressed in literature.
Byzantine Proposer Fast Paxos addresses denial of service, but only of the sort that would delay message sending through attacks not related to flooding with incrementing (proposal) counters.
Having said that, integer overflow is probably the least of your problems. Instead of thinking about integer overflow, you might want to consider membership attacks first (via DoS). Learning about membership after consensus from several nodes may be a viable strategy, but probably still vulnerable to Sybil attacks at some level.
Another strategy may be to incorporate some proof-of-work system for proposals to limit the flood of requests. However, it's difficult to know what to balance the work against (for example, free currency when you mine the blockchain in Bitcoin). It really depends on what type of system you're trying to build. You should consider the value of information in your system, then create a proof-of-work system that makes circumvention cost slightly more than that value.
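As a rough sketch of what a proof-of-work gate on proposals could look like (the difficulty value and the message format are made up, and a real system would also bind the work to the proposer's identity and the current round):

```python
import hashlib
from itertools import count

DIFFICULTY_BITS = 18  # illustrative; tune against the value of a proposal

def solve_pow(proposal: bytes) -> int:
    """Find a nonce such that sha256(proposal || nonce) has DIFFICULTY_BITS leading zero bits."""
    target = 1 << (256 - DIFFICULTY_BITS)
    for nonce in count():
        digest = hashlib.sha256(proposal + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce

def verify_pow(proposal: bytes, nonce: int) -> bool:
    """Acceptors check the work before even looking at the proposal number."""
    digest = hashlib.sha256(proposal + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - DIFFICULTY_BITS))

nonce = solve_pow(b"proposal:42")
print(nonce, verify_pow(b"proposal:42", nonce))
```

Verification is a single hash, so honest acceptors pay almost nothing while a flooder has to redo the expensive search for every proposal.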
However, once you have the ability to slow down a proposal counter, you still need to worry about integer maximums in any system with a high number of (valid) operations. You should have a strategy for number wrapping or a multiple-precision scheme in place, where you can clearly determine how many years or decades your network can run without blowing out a fixed-precision counter. If you can determine that your system will run for 100 years (or whatever) without blowing out your fixed-precision counter, even with malicious entities, then you can choose to simplify things.
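A quick back-of-envelope check, assuming a made-up worst-case proposal rate, shows why a 64-bit counter often lets you simplify while a 32-bit one does not:

```python
# Back-of-envelope check: how long until a fixed-width proposal counter wraps?
# The proposal rate is an assumption you would replace with your own worst case.
proposals_per_second = 1_000_000          # assumed worst-case rate
seconds_per_year = 365.25 * 24 * 3600

for bits in (32, 64):
    lifetime_years = (2 ** bits) / proposals_per_second / seconds_per_year
    print(f"{bits}-bit counter: ~{lifetime_years:,.4f} years at {proposals_per_second:,} proposals/s")
# 32-bit wraps in roughly an hour at this rate; 64-bit lasts hundreds of
# thousands of years, which is the "choose to simplify" case described above.
```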
On another (important) note, the system model used in most papers doesn't reflect everything that makes a real-life implementation practical (Raft is a nice exception to this). If anything, some authors are guilty of creating a system model that is designed to avoid a hard problem that they haven't found an answer to. So, if someone says that X will solve everything, please be aware that they only mean it solves everything in the very specific system model that they defined. On the other side of this, you should consider that any statement that "Y is impossible" is just as closely tied to its system model. A nice example to explain this concept is the completely asynchronous message passing of the Ben-Or consensus algorithm, which uses nondeterminism in the system model's state machine to avoid the limits specified by the FLP impossibility result (which specifies that consensus requires partially asynchronous message passing when the system model's state machine is deterministic).
So, you should continue to consider the "impossible" after you read a proof that says it can't be done. Nancy Lynch did a nice writeup on this concept.
I guess what I'm really saying is that a good solution to your question doesn't really exist yet. If you figure it out, please publish it (or let me know if you find an existing paper).

How do programmers test their algorithm in TopCoder or other competitions?

Good programmers who write programs of moderate to higher difficulty in competitions like TopCoder or ACM ICPC have to ensure the correctness of their algorithm before submission.
Although they are provided with some sample test cases to check the output, how does that guarantee that the program will behave correctly? They can write some test cases of their own, but it won't always be possible to know the correct answer through manual calculation. How do they do it?
Update: As it seems, it is not quite possible to analyze and guarantee the outcome of an algorithm given the tight constraints of a competitive environment. However, if there are any manual, more common traits which are adopted while solving such problems, that should be enough to answer the question. Something like best practices.
In competitions, the top programmers have enough experience to read the question, and think of some test cases that should catch most of the possibilities for input.
It catches most of the bugs usually - but it is NOT 100% safe.
However, in real life critical applications (critical systems on air planes or nuclear reactors for example) there are methods to PROVE some piece of code does what it is supposed to do.
This is the field of formal verification - which is way too complex and time consuming to be done during a contest, but for some systems it is used because mistakes could not be tolerated.
Some additional information:
Formal verification basically consists of 2 parts:
Manual verification - here we use proving systems such as Hoare logic and manually prove that the program does what we want it to do.
Automatic model checking - model the problem as a state machine, and use model-checking tools to verify that the module does what it is supposed to do (or does not do something "bad").
Specifying "what it should do" is usually done with temporal logic.
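To make that concrete, here is one small, made-up property in each style (the specific program and property are illustrative only):

```latex
% A Hoare-logic triple: if x >= 0 holds before the assignment,
% then y > 0 holds after it.
\{\, x \ge 0 \,\}\quad y := x + 1 \quad \{\, y > 0 \,\}

% An LTL (temporal-logic) property for a model checker:
% "every request is eventually followed by a grant".
\mathbf{G}\,(\mathit{request} \rightarrow \mathbf{F}\,\mathit{grant})
```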
This is often used to verify the correctness of hardware models as well. For example, Intel uses it to ensure they won't get the floating-point bug again.
Picture this: imagine you are a top programmer. That means you know a bunch of algorithms and wouldn't think twice while implementing them. You know how to modify an already known algorithm to suit the problem's needs. You are strong at estimating time and complexity, and you expect that in the worst case your tailored algorithm will run within the time and memory constraints.
At this level you simply think, use a scratchpad for about five to ten minutes, and have a super clear algorithm before you start to code. Once you finish coding, you hit compile and there is usually no compilation error, because the code is so intuitive to you.
Then, based on the algorithm and data structures used, you expect that there might be one of the following issues:
a corner case
an overflow problem
A corner case is when you have coded for the general case, but when, say, N=1, the answer is different from the others, so you generally write it as a special case.
An overflow is when intermediate values or results overflow a data type's limits.
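As an illustration of the intermediate-overflow trap (Python integers do not overflow, so 32-bit wrap-around is simulated here to mimic what a fixed-width int would do in C-like languages):

```python
# Python ints do not overflow, so simulate 32-bit signed arithmetic to show
# what happens to an intermediate product in a language with fixed-width ints.
def as_int32(x):
    x &= 0xFFFFFFFF
    return x - (1 << 32) if x >= (1 << 31) else x

# Computing (a * b) % m: the product 5,000,000,000 does not fit in 32 bits.
a, b, m = 100_000, 50_000, 7
exact = (a * b) % m                 # correct answer
overflowed = as_int32(a * b) % m    # what a 32-bit intermediate would give
print(exact, overflowed)            # the two differ, which is the bug to catch
```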
You make note of any problems which arise at this point, and use this data during the Challenge phase (as in TopCoder).
Once you have checked against these two, you hit Submit.
There's a time element to Top Coder, so it's not possible to test every combination within that constraint. They probably do the best they can and rely on experience for the rest, just as one does in real life. I don't know that it's ever possible to guarantee that a significant piece of code is error free forever.

What are the faster Paxos-related algorithms for consensus in distributed systems?

I've read Lamport's paper on Paxos. I've also heard that it isn't used much in practice, for reasons of performance. What algorithms are commonly used for consensus in distributed systems?
Not sure if this is helpful (since this is not from actual production information), but in our "distributed systems" course we've studied, along with Paxos, the Chandra-Toueg and Mostefaoui-Raynal algorithms (of the latter our professor was especially fond).
Check out the Raft algorithm for a consensus algorithm that is optimized for ease of understanding and clarity of implementation. Oh... it is pretty fast as well.
https://ramcloud.stanford.edu/wiki/display/logcabin/LogCabin
https://ramcloud.stanford.edu/wiki/download/attachments/11370504/raft.pdf
If performance is an issue, consider whether you need all of the strong consistency guarantees Paxos gives you. See e.g. http://queue.acm.org/detail.cfm?id=1466448 and http://incubator.apache.org/cassandra/. Searching on Paxos optimised gets me hits, but I suspect that relaxing some of the requirements will buy you more than tuning the protocol.
The Paxos system I run (which supports really, really big web sites) is halfway in between Basic Paxos and Multi-Paxos. I plan on moving it to a full Multi-Paxos implementation.
Paxos isn't that great as a high-throughput data storage system, but it excels in supporting those systems by providing leader election. For example, say you have a replicated data store where you want a single master for performance reasons. Your data store nodes will use the Paxos system to choose the master.
Like Google Chubby, my system runs as a service and can also store data as a configuration container. (I use "configuration" loosely; I hear Google uses Chubby for DNS.) This data doesn't change as often as user input, so it doesn't need high-throughput write SLAs. Reading, on the other hand, is extremely quick because the data is fully replicated and you can read from any node.
Update
Since writing this, I have upgraded my Paxos system. I am now using a chain-consensus protocol as the primary consensus system. The chain system still utilizes Basic-Paxos for re-configuration—including notifying chain nodes when the chain membership changes.
Paxos is optimal in terms of performance of consensus protocols, at least in terms of the number of network delays (which is often the dominating factor). It's clearly not possible to reliably achieve consensus while tolerating up to f failures without a single round-trip communication to at least (f-1) other nodes in between a client request and the corresponding confirmation, and Paxos achieves this lower bound. This gives a hard bound on the latency of each request to a consensus-based protocol regardless of implementation. In particular, Raft, Zab, Viewstamped Replication and all other variants on consensus protocols all have the same performance constraint.
One thing that can be improved from standard Paxos (also Raft, Zab, ...) is that there is a distinguished leader which ends up doing more than its fair share of the work and may therefore end up being a bit of a bottleneck. There is a protocol known as Egalitarian Paxos which spreads the load out across multiple leaders, although it's mindbendingly complicated IMO, is only applicable to certain domains, and still must obey the lower bound on the number of round-trips within each request. See the paper "There Is More Consensus in Egalitarian Parliaments" by Moraru et al for more details.
When you hear that Paxos is rarely used due to its poor performance, it is frequently meant that consensus itself is rarely used due to poor performance, and this is a fair criticism: it is possible to achieve much higher performance if you can avoid the need for consensus-based coordination between nodes as much as possible, because this allows for horizontal scalability.
Snarkily, it's also possible to achieve better performance by claiming to be using a proper consensus protocol but actually doing something that fails in some cases. Aphyr's blog is littered with examples of these failures not being as rare as you might like, where database implementations have either introduced bugs into good consensus-like protocols by way of "optimisation", or else developed custom consensus-like protocols that fail to be fully correct in some subtle fashion. This stuff is hard.
You should check out the Apache ZooKeeper project. It is used in production by Yahoo! and Facebook, among others.
http://hadoop.apache.org/zookeeper/
If you look for academic papers describing it, it is described in a paper at USENIX ATC'10. The consensus protocol (a variant of Paxos) is described in a paper at DSN'11.
Google documented how they did fast paxos for their megastore in the following paper: Link.
With Multi-Paxos, when the leader is galloping it can respond to the client write once it has heard that a majority of nodes have written the value to disk. This is as good and efficient as you can get while maintaining the consistency guarantees that Paxos makes.
Typically, though, people use something Paxos-like such as ZooKeeper as an external service (a dedicated cluster) to keep critical information consistent (who has locked what, who is leader, who is in the cluster, what the configuration of the cluster is), and then run a less strict algorithm with weaker consistency guarantees which relies upon application specifics (e.g. vector clocks and merged siblings). The short ebook Distributed Systems for Fun and Profit is a good overview of the alternatives.
Note that lots of databases compete on speed by using risky defaults which risk consistency and can lose data under network partitions. The Aphyr blog series on Jepsen shows whether well-known open-source systems lose data. One cannot cheat the CAP theorem; if you configure systems for safety, they end up doing about the same messaging and the same disk writes as Paxos. So really you cannot say "Paxos is slow"; you have to say "a part of a system which needs consistency under network partitions requires a minimum number of messages and disk flushes per operation, and that is slow".
There are two general classes of blockchain consensus systems:
Those that produce unambiguous 100% finality given a defined set of validators
Those which do not provide 100% finality but instead rely on a high probability of finality
The first generation blockchain consensus algorithms (Proof of Work, Proof of Stake, and BitShares’ Delegated Proof of Stake) only offer high probability of finality that grows with time. In theory someone could pay enough money to mine an alternative “longer” Bitcoin blockchain that goes all the way back to genesis.
More recent consensus algorithms, whether HashGraph, Casper, Tendermint, or DPOS BFT all adopt long-established principles of Paxos and related consensus algorithms. Under these models it is possible to reach unambiguous finality under all network conditions so long as more than ⅔ of participants are honest.
Objective and unambiguous 100% finality is a critical property for all blockchains that wish to support inter-blockchain communication. Absent 100% finality, a reversion on one chain could have irreconcilable ripple effects across all interconnected chains.
The abstract protocol for these more recent designs involves the following steps (a small sketch of the threshold check follows the list):
Propose a block
All participants acknowledge the block (pre-commitment)
All participants acknowledge when ⅔+ have sent them pre-commitments (commitment)
A block is final once a node has received ⅔+ commitments
Unanimous agreement on finality is guaranteed unless ⅓+ are bad and evidence of bad behavior is available to all
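Here is a minimal sketch of the 2/3+ threshold checks from the steps above; the function names and message shapes are hypothetical, and a real protocol would also verify signatures and track which validator sent each message:

```python
# Sketch of the 2/3+ quorum checks behind the pre-commit / commit steps.
def quorum(n_validators: int) -> int:
    """Smallest count strictly greater than 2/3 of the validator set."""
    return (2 * n_validators) // 3 + 1

def have_precommit_quorum(precommits: set, n_validators: int) -> bool:
    """A node moves to the commitment step once 2/3+ pre-commitments arrive."""
    return len(precommits) >= quorum(n_validators)

def block_is_final(commitments: set, n_validators: int) -> bool:
    """A block is final once 2/3+ commitments have been received."""
    return len(commitments) >= quorum(n_validators)

n = 10
print(quorum(n))                                  # 7 of 10 validators
print(block_is_final({1, 2, 3, 4, 5, 6, 7}, n))   # True
print(block_is_final({1, 2, 3, 4, 5, 6}, n))      # False
```

With n = 3f + 1 validators this threshold equals 2f + 1, which is why finality holds as long as fewer than one third of participants misbehave.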
It is the technical differences in the protocols that give rise to real-world impact on user experience. This includes things such as latency until finality, degrees of finality, bandwidth, and proof generation / validation overhead.
Look for more details on delegated proof of stake by EOS here.
Raft is a more understandable and faster alternative to Paxos. One of the most popular distributed systems which uses Raft is etcd, the distributed store used in Kubernetes.
It is equivalent to Paxos in fault tolerance.
