Probabilistic logic vs. analog logic

There are research articles (e.g. Chakrapani & Palem) and devices (e.g. Lyric) that use a so-called probabilistic logic. I suppose the idea is that the outputs of such a device, given some inputs, will converge to some probability distribution. What is the difference between these devices and those using analog signals? That is, are these devices still considered digital, analog, or mixed-signal?

This paper seems to describe some novel kind of (probabilistic) Boolean logic, and it is not about implementation. I only skimmed through the paper, but it seems to be just another one of those theories. There is, by the way, a simple reason why probabilistic logics don't give you what classical logics give you: they are not truth functional (i.e. the probability of A & B is not determined solely by the probability of A and the probability of B).
As for implementing such a thing on a chip: I think both are possible. If you do it digitally, then you're calculating probabilities, and you can just as well run some code on a CPU. I don't really know about analog implementations, but I guess any elementary analog component (transistor, opamp etc) can be seen as performing some kind of basic arithmetic operation on voltages and currents. Whether a circuit gives outputs that adhere to, or approximate, the Kolmogorov laws of probability, that's another question, but my guess is: it is somehow possible and maybe it has been done.
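To make the "calculating probabilities" option concrete, here is a minimal Python sketch (my own illustration, not taken from the paper or from Lyric's devices) that treats each signal as the probability of its being 1. The AND/OR combinators below assume the inputs are independent, and the final comment shows why the logic is not truth functional: the same input probabilities can yield different output probabilities depending on how the inputs are correlated.

    # Minimal sketch: "probabilistic logic" as ordinary arithmetic on probabilities.
    def p_not(p_a):
        return 1.0 - p_a
    def p_and(p_a, p_b):
        # Valid only if A and B are independent.
        return p_a * p_b
    def p_or(p_a, p_b):
        # Inclusion-exclusion, again assuming independence.
        return p_a + p_b - p_a * p_b
    p_a = p_b = 0.5
    print(p_and(p_a, p_b))   # 0.25 under independence
    # If B always equals A, P(A and B) = 0.5; if B is always not-A, P(A and B) = 0.
    # Same P(A) and P(B), different P(A and B): the connective is not truth functional.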

Are muxes more "expensive" than other logic?

This is mostly out of curiosity.
One fragment from some VHDL code that I've been working on recently resembles the following:
led_q <= (pwm_d and ch_ena) when pwm_ena = '1' else ch_ena;
This is a mux-style expression, of course. But it's also equivalent to the following basic logic expression (at least when ignoring non-binary states):
led_q <= ch_ena and (pwm_d or not pwm_ena);
Is one "better" than the other in terms of logic utilisation or efficiency when actually implemented in an FPGA? Is it preferable to use one over the other, or is the compiler smart enough to pick the "best" on its own?
(For the curious, the purpose of the expression is to define the state of an LED -- if ch_ena is false it should always be off as the channel is disabled; otherwise it should either be on solidly or flashing according to pwm_d, depending on pwm_ena (PWM enable). I think the first form describes this more obviously than the second, although it's not too hard to work out how the second behaves.)
For a simple logical expression, like the one shown, where the synthesis tool can easily create a complete truth table, the expression is likely to be converted to an internal truth table, which is then directly mapped to the available FPGA LUT resources. Since the truth table is identical for the two equivalent expressions, the hardware will also be the same.
However, for complex expressions where a complete truth table can't be generated, e.g. when using arithmetic operations, and/or where dedicated resources are available, the synthesis tool may choose to hold an internal representation that is more closely related to the original VHDL code, and in this case the VHDL coding style can have a great impact on the resulting logic, even for equivalent expressions.
In the end, the implementation is tool-specific, so the best way to find out what logic is generated is to try it with the specific tool, especially for large or timing-critical parts of the design where the implementation matters most.
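As a quick sanity check on the claim that the two coding styles share a truth table, here is a small brute-force comparison in Python (purely illustrative; it has nothing to do with how a synthesis tool works internally, and it ignores non-binary states just as the question does):

    from itertools import product
    def mux_form(pwm_d, ch_ena, pwm_ena):
        # led_q <= (pwm_d and ch_ena) when pwm_ena = '1' else ch_ena;
        return (pwm_d and ch_ena) if pwm_ena else ch_ena
    def bool_form(pwm_d, ch_ena, pwm_ena):
        # led_q <= ch_ena and (pwm_d or not pwm_ena);
        return ch_ena and (pwm_d or not pwm_ena)
    for pwm_d, ch_ena, pwm_ena in product([False, True], repeat=3):
        assert mux_form(pwm_d, ch_ena, pwm_ena) == bool_form(pwm_d, ch_ena, pwm_ena)
    print("identical 8-row truth table")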
In general it depends on the target architecture. For Xilinx FPGAs the logic is mostly mapped into LUTs with sporadic use of the hard logic resources where the mapper can make use of them. Every possible LUT configuration has essentially equal performance so there's little benefit to scrutinizing the mapper's work unless you're really pushing the speed limits of the device where you'd be forced into manually instantiating hand-mapped LUTs.
Non-LUT based architectures like the Actel/Microsemi device families use 2-input muxes as the main logic primitive and everything is mapped down to them. You can't generalize what is best across all types of FPGAs and CPLDs but nowadays you can mostly trust that the mapper will do a decent enough job using timing constraints to push it toward the results you need.
With regard to the question, I think it is best to avoid obscure Boolean expressions where possible. They tend to be hard to decipher months later when you've forgotten what you meant them to do. I would lean toward the when-else simply from a code-maintenance point of view. Even for this trivial example you have to think closely about what behavior the Boolean form describes, whereas the when-else describes the intended behavior directly in human-level syntax.
HDLs work best when you use the highest abstraction possible and avoid wallowing around with low-level bit twiddling. This is a place where VHDL truly shines if you leverage the more advanced features of the language and move away from describing raw logic everywhere. Let the synthesizer do the work. Introductory learning materials focus on the low level structural gate descriptions and logic expressions because that is easiest for beginners to get a start on but it is not the best way to use VHDL for complex designs in the long run.
Of course there are situations where Booleans are better, particularly when doing bitwise operations across vectors in parallel which requires messy loops to do the same imperatively. It all depends on the context.

True random number generator using VHDL

I've been asked to design a true random number generator in VHDL. After a lot of struggle I could only design PRNGs, not a TRNG. Is it possible to generate perfectly random numbers? Please advise; I'm really clueless!
There is NO such thing as a "true" random number generator. This is one of my favorite pseudo-random generators however, and would be fun to implement in VHDL.
http://en.wikipedia.org/wiki/Xorshift
Also, see this: http://en.wikipedia.org/wiki/Random_number_generation#.22True.22_random_numbers_vs._pseudorandom_numbers
The only thing that I can think of to get you "better" randomness would be to do something like write a file and then read a file. The scheduler on the host PC might have enough entropy associated with it to cause some variance in the time it takes for these operations and you could use that time as a key to seed your algorithm.
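For reference, here is what the xorshift idea looks like in software. This is only a Python sketch of Marsaglia's 32-bit xorshift (shift constants 13, 17, 5), with the seed taken from a high-resolution timer as a crude stand-in for the file-timing trick suggested above; the result is still a PRNG, not a TRNG.

    import time
    MASK32 = 0xFFFFFFFF  # keep every value in 32 bits, as a hardware register would
    def xorshift32(state):
        # One step of Marsaglia's 32-bit xorshift.
        state ^= (state << 13) & MASK32
        state ^= state >> 17
        state ^= (state << 5) & MASK32
        return state & MASK32
    # Crude timing-based seed; xorshift must never be seeded with zero.
    state = (time.perf_counter_ns() & MASK32) or 1
    for _ in range(5):
        state = xorshift32(state)
        print(hex(state))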
Since you are asking about VHDL, you want to design special-purpose hardware. Now if you operate hardware in a way which should never be done for digital logic, you might get some kind of "true" random behavior.
If, for example, you design a circuit with a D-type flip-flop that is clocked exactly when its data input changes level, the output becomes metastable, i.e. it is undefined (somewhere between 0 and 1) for some time before it settles back to a stable 0 or 1. How long this takes depends, among other things, on electrical noise, i.e. it is random. I could imagine that such effects can be used to build a random generator.
Contrary to the claims of most of the other answers, there are several TRNG designs for FPGAs mostly based on ring oscillators or self-timed rings, see e.g.
B. Yang, "True Random Number Generators for FPGAs," PhD thesis, KU Leuven, N. Mentens, and I. Verbauwhede (promotors), 2018.
and
VHDL TRNG designs
Thank you all for your replies. I'm thinking of using a register holding different values and reading it each time the repetition starts; the idea is to provide different seed values so I can get different random values. Since I'm new to VHDL coding, I'm not sure whether this works, but it's just an attempt on my part. Any suggestions are welcome.
You're not going to get a true random number generator out of an FPGA / VHDL. The best you can hope for is a h/w PRNG that's readable from some register somewhere.
You might choose to implement one of the PRNG algorithms out there. You're then going to have to trust the algorithm designer and then trust whichever VHDL implementation you go with (your own or one you acquire from someone else). You might start by looking at:
http://en.wikipedia.org/wiki/List_of_pseudorandom_number_generators#Cryptographic_algorithms

How do programmers test their algorithm in TopCoder or other competitions?

Good programmers who write programs of moderate to higher difficulty in competitions such as TopCoder or ACM ICPC have to ensure the correctness of their algorithm before submission.
Although they are provided with some sample test cases to check their output, how does that guarantee that the program will behave correctly? They can write some test cases of their own, but it won't always be possible to know the correct answer through manual calculation. How do they do it?
Update: It seems it is not really possible to analyze and guarantee the outcome of an algorithm under the tight constraints of a competitive environment. However, if there are common, manual practices that are adopted while solving such problems, those should be enough to answer the question. Something like best practices.
In competitions, the top programmers have enough experience to read the question, and think of some test cases that should catch most of the possibilities for input.
It catches most of the bugs usually - but it is NOT 100% safe.
However, in real-life critical applications (critical systems on airplanes or in nuclear reactors, for example) there are methods to PROVE that some piece of code does what it is supposed to do.
This is the field of formal verification - which is way too complex and time consuming to be done during a contest, but for some systems it is used because mistakes could not be tolerated.
Some additional information:
Formal verification basically consists of two parts:
Manual verification - here we use proof systems such as Hoare logic and manually prove that the program does what we want it to do.
Automatic model checking - modeling the problem as a state machine and using model-checking tools to verify that the module does what it is supposed to do (or never does anything "bad"), as sketched below.
Specifying "what it should do" is usually done with temporal logic.
This is often used to verify the correctness of hardware models as well. For example, Intel uses it to ensure they won't get a floating-point bug (like the Pentium FDIV bug) again.
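As a toy illustration of the "model the problem as a state machine and check it" idea, here is a minimal explicit-state check in Python (my own example, nothing like a real industrial model checker): it enumerates every reachable state of a little two-light traffic controller and asserts the safety property "never both green" in each one.

    from collections import deque
    INITIAL = ("green", "red")  # (north-south light, east-west light)
    def successors(state):
        ns, ew = state
        nxt = {"green": "yellow", "yellow": "red"}
        out = []
        if ns in nxt:
            out.append((nxt[ns], ew))
        elif ew == "red":            # both lights red: north-south may turn green
            out.append(("green", ew))
        if ew in nxt:
            out.append((ns, nxt[ew]))
        elif ns == "red":            # both lights red: east-west may turn green
            out.append((ns, "green"))
        return out
    def safe(state):
        return state != ("green", "green")
    # Breadth-first search over all reachable states, checking safety in each one.
    seen, queue = {INITIAL}, deque([INITIAL])
    while queue:
        s = queue.popleft()
        assert safe(s), f"safety violated in {s}"
        for t in successors(s):
            if t not in seen:
                seen.add(t)
                queue.append(t)
    print(f"checked {len(seen)} reachable states, property holds")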
Picture this: imagine you are a top programmer. That means you know a bunch of algorithms and wouldn't think twice while implementing them. You know how to modify an already-known algorithm to suit the problem's needs. You are strong at estimating time and space complexity, and you expect that in the worst case your tailored algorithm will run within the time and memory constraints.
At this level you simply think, using a scratchpad, for about five to ten minutes and have a super-clear algorithm before you start to code. Once you finish coding, you hit compile and there is usually no compilation error, because the code is so intuitive to you.
Then, based on the algorithm and data structures used, you expect that there might be one of the following issues:
a corner case
an overflow problem
A corner case is when you have coded for the general case, but for, say, N=1 the answer is different from the rest, so you generally write it as a special case.
An overflow is when intermediate values or results overflow a data type's limits.
You make note of any problems which arise at this point, and use this data during the Challenge phase (as in TopCoder).
Once you have checked against these two, you hit Submit.
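One concrete way to hunt for such corner cases and overflow bugs, when you cannot compute the expected answers by hand, is to compare the solution you intend to submit against a slow but obviously correct brute force on many small random inputs. A minimal Python sketch, using maximum subarray sum as a made-up example problem:

    import random
    def brute(a):
        # O(n^2) reference: try every non-empty subarray.
        return max(sum(a[i:j]) for i in range(len(a)) for j in range(i + 1, len(a) + 1))
    def fast(a):
        # Kadane's algorithm, O(n) -- the solution you would actually submit.
        best = cur = a[0]
        for x in a[1:]:
            cur = max(x, cur + x)
            best = max(best, cur)
        return best
    for _ in range(1000):
        a = [random.randint(-10, 10) for _ in range(random.randint(1, 8))]
        assert brute(a) == fast(a), a   # a failure prints the offending input
    print("1000 random tests passed")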
There's a time element to TopCoder, so it's not possible to test every combination within that constraint. They probably do the best they can and rely on experience for the rest, just as one does in real life. I don't know that it's ever possible to guarantee that a significant piece of code is error-free forever.

How to calculate indefinite integral programmatically

I remember solving a lot of indefinite integration problems. There are certain standard methods of solving them, but nevertheless there are problems which take a combination of approaches to arrive at a solution.
But how can we achieve the solution programmatically?
For instance, look at Mathematica's online integrator app. How do we approach writing a program which accepts a function as an argument and returns the indefinite integral of that function?
P.S. The input function can be assumed to be continuous (i.e. it is not, for instance, sin(x)/x).
You have Risch's algorithm, which is subtly undecidable (since you must decide whether two expressions are equal, akin to the ubiquitous halting problem) and really long to implement.
If you're into complicated stuff, solving an ordinary differential equation is actually no harder (and computing an indefinite integral is equivalent to solving y' = f(x)). There is a differential Galois theory which mimics Galois theory for polynomial equations (but with Lie groups of symmetries of solutions instead of finite groups of permutations of roots). Risch's algorithm is based on it.
The algorithm you are looking for is the Risch algorithm:
http://en.wikipedia.org/wiki/Risch_algorithm
I believe it is a bit tricky to use. This book:
http://www.amazon.com/Algorithms-Computer-Algebra-Keith-Geddes/dp/0792392590
has a description of it. A 100-page description.
You keep a set of basic forms you know the integrals of (polynomials, elementary trigonometric functions, etc.) and you try to match the input against those forms. This is doable if you don't need much generality: it's very easy to write a program that integrates polynomials, for example.
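As an example of that easy end of the spectrum, here is a minimal Python sketch (my own illustration) that integrates a polynomial term by term, with the polynomial given as a list of coefficients:

    from fractions import Fraction
    def integrate_poly(coeffs):
        # coeffs[n] is the coefficient of x^n; the integral of a*x^n is a/(n+1)*x^(n+1).
        # The constant of integration is taken to be 0.
        return [Fraction(0)] + [Fraction(a, n + 1) for n, a in enumerate(coeffs)]
    def pretty(coeffs):
        terms = [f"({c})*x^{n}" for n, c in enumerate(coeffs) if c != 0]
        return " + ".join(terms) if terms else "0"
    # integral of 1 + 3x^2 is x + x^3 (+ C)
    print(pretty(integrate_poly([1, 0, 3])))   # (1)*x^1 + (1)*x^3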
If you want to do it in the most general case possible, you'll have to do much of the work that computer algebra systems do. It is a lifetime's work for some people: if you look at the Risch "algorithm" posted in other answers, or at symbolic integration in general, you can see that entire multi-volume books have been written on the topic (e.g. Manuel Bronstein, Symbolic Integration I, Springer), and very few existing computer algebra systems implement it in full generality.
If you really want to code it yourself, you can look at the source code of Sage or the several projects listed among its components. Of course, it's easier to use one of these programs, or, if you're writing something bigger, use one of these as libraries.
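If using one of those systems as a library is an option, SymPy (one of the components shipped with Sage) already exposes symbolic integration directly; a quick usage example (the exact printed form of the results may vary between versions):

    from sympy import symbols, integrate, sin, exp
    x = symbols("x")
    # SymPy tries a collection of techniques and returns the antiderivative
    # without the constant of integration.
    print(integrate(x**2 * exp(x), x))    # roughly (x**2 - 2*x + 2)*exp(x)
    print(integrate(sin(x) * exp(x), x))  # roughly exp(x)*sin(x)/2 - exp(x)*cos(x)/2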
These expert systems usually have a huge collection of techniques and simply try one after another.
I'm not sure about Wolfram's Mathematica, but in Maple there's a command that enables displaying all intermediate steps. If you do so, you get all the tried techniques as output.
Edit:
Transforming the input should not be the really tricky part: you need to write a lexer and a parser that transform the textual input into an internal representation.
Good luck. Mathematica is a very complex piece of software, and symbolic manipulation is something it does best. If you are interested in the topic, take a look at these books:
http://www.amazon.com/Computer-Algebra-Symbolic-Computation-Elementary/dp/1568811586/ref=sr_1_3?ie=UTF8&s=books&qid=1279039619&sr=8-3-spell
Also, going to the source wouldn't hurt either. This book actually explains the inner workings of Mathematica:
http://www.amazon.com/Mathematica-Book-Fourth-Stephen-Wolfram/dp/0521643147/ref=sr_1_7?ie=UTF8&s=books&qid=1279039687&sr=1-7

What makes people think that NNs have more computational power than existing models?

I've read on Wikipedia that neural-network functions defined on the field of arbitrary real/rational numbers (along with algorithmic schemas and the speculative 'transrecursive' models) have more computational power than the computers we use today. Admittedly, it was a page on the Russian Wikipedia (ru.wikipedia.org) and may not be properly proven, but that's not the only source of such... rumors.
Now, the thing that I really do not understand is: how can a string-rewriting machine (NNs are exactly string-rewriting machines, just as Turing machines are; only the programming language is different) be more powerful than a universally capable U-machine?
Yes, the descriptive instrument is really different, but the fact is that any function of such class can be (easily or not) turned to be a legal Turing-machine. Am I wrong? Do I miss something important?
What is the cause of people saying that? I do know that the phenomenon of undecidability is widely accepted today (though not consistently proven, according to what I've read), but I do not see the smallest chance of NNs being able to solve that particular problem.
Addendum: by "not consistently proven according to what I've read" I meant that you might want to take a look at papers by A. Zenkin (a Russian mathematician) from the mid-90s onward, where he persuasively postulates the wrongness of G. Cantor's concepts, including transfinite sets, uncountable sets, the diagonalization method (the method used in Turing's undecidability proof) and maybe others. Even Gödel's incompleteness theorems were only proven the right way in the 21st century... That's all just to plug Zenkin's work into the post, because I don't know how widespread that knowledge is in the CS community, so forgive me if that looks stupid.
Thank you!
From what little research I've done, most of these claims of trans-Turing systems, or of the incorrectness of Cantor's diagonalization proof, etc. are, shall we say, "controversial" in legitimate mathematical circles. Words like "crank" get thrown around frequently.
Obviously, the strong Church-Turing thesis remains unproven, but as you pointed out there's really no good reason to believe that artificial neural networks constitute computational capabilities beyond general recursion/UTMs/lambda calculus/etc.
From a theoretical viewpoint, I think you're absolutely correct -- neural networks provide very little that's new or different.
From a practical viewpoint, neural networks are simply a way of casting solutions into a form where parallel execution is natural and easy, whereas Turing machines are sequential in nature, and executing their sequences in parallel is relatively difficult. In fact, most of what's been done in CPU development over the last few decades has basically been figuring out ways to execute code in parallel while maintaining the illusion that it's executing in sequence. A lot of the hardware in a modern CPU is devoted to maintaining that illusion, and the degree to which parallel execution has become explicit is mostly an admission that maintaining the illusion has become prohibitively expensive.
Anyone who "proves" that Cantor's diagonal method doesn't work proves only their own incompetence. Cf. Wilfred Hodges' An editor recalls some hopeless papers for a surprisingly sympathetic explanation of what kind of thing is going wrong with these attempts.
You can provide speculative descriptions of hyper-Turing neural nets, just as you can provide speculative descriptions of other kinds of hyper-Turing computers: there's nothing incoherent in the idea that hypercomputation is possible. Speculative descriptions of mechanical hypercomputers have been given in which the hypercomputer is stipulated to have infinitely fine engravings that encode an oracle for the halting problem; the existence of such a machine is consistent with Newtonian mechanics, though not with quantum mechanics. Rather, the Church-Turing thesis says that such machines can't be constructed, and there are two reasons to believe the Church-Turing thesis is correct:
No such machines have ever been constructed; and
Work has been done connecting models of physics to models of computation, going back to work in the early 1970s by Robin Gandy, with recent work by people such as David Deutsch (e.g., Machines, Logic and Quantum Physics) and John Tucker (e.g., Computations via experiments with kinematic systems), which argues that physics doesn't support hypercomputation.
The main point is that the truth of the Church-Turing thesis is an empirical fact, and not a mathematical fact. It's one that we can have confidence is true, but not certainty.
From a layman's perspective, I see that
NNs can be more effective at solving some types of problems than a Turing machine, but they are not computationally more powerful.
Even if NNs were provably more powerful than TMs, execution on current hardware would render them less powerful, since current hardware is only an approximation of a TM and can only execute problems computable by a bounded TM.
You may be interested in S. Franklin and M. Garzon, Neural computability. There is a preview on Google. It discusses the computational power of neural nets and also states that it is rumored that neural nets are strictly more powerful than Turing machines.

Resources