Can quantum algorithms be useful?
Has anyone been successful in putting quantum algorithms to any use?
"Quantum algorithms" are algorithms to be run on quantum computers.
There are things that can be done quickly in the quantum computation model that are not known (or believed) to be possible with classical computation: discrete logarithm and integer factorisation (see Shor's algorithm) are in BQP but are not believed to be in P (or BPP). Thus, when/if a quantum computer is built, it is known that it could break RSA and most current cryptography.
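To make the RSA connection concrete, here is a toy Python sketch of the classical part of Shor's algorithm: the reduction from order finding to factoring. The order-finding step below is done by brute force, which is exponentially slow; the whole point of the quantum computer is to do exactly that step quickly.

    import math
    import random

    def find_order(a, n):
        # Brute-force order finding: smallest r > 0 with a^r = 1 (mod n).
        # This is the step a quantum computer speeds up; classically it is slow.
        r, x = 1, a % n
        while x != 1:
            x = (x * a) % n
            r += 1
        return r

    def shor_classical_part(n):
        # Classical reduction from order finding to a nontrivial factor of n.
        while True:
            a = random.randrange(2, n)
            g = math.gcd(a, n)
            if g > 1:
                return g                  # lucky draw: a shares a factor with n
            r = find_order(a, n)
            if r % 2 == 0:
                y = pow(a, r // 2, n)
                if y != n - 1:            # need a^(r/2) != -1 (mod n)
                    f = math.gcd(y - 1, n)
                    if 1 < f < n:
                        return f

    print(shor_classical_part(15))        # prints 3 or 5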
However,
quantum computers are not believed to be able to solve NP-complete problems in polynomial time, and more importantly,
no one has built a quantum computer yet, and it is not even clear whether it will be possible to build one -- avoiding decoherence, etc. (There have been claims of quantum computers with a small number of qubits -- 5 to 10 -- but clearly they are not useful for much.)
"Well, there's a quantum computer that can factor 15, so those of you using
4-bit RSA should worry." -- Bruce Schneier
[There is also the idea of quantum cryptography, which is cryptography over a quantum channel, and is something quite different from quantum computation.]
The only logical answer is that they are both useful and not useful. ;-)
My understanding is that current quantum capabilities can already be used to exchange keys securely (quantum key distribution). The exchanged keys can then be used to perform traditional cryptography.
From what I know about quantum computing and algorithms, the main use of quantum algorithms so far has been in cryptography; if you are really interested in cryptography, do check those out. Basically it all comes down to how well you know the fundamentals of quantum mechanics and discrete mathematics. For example, consider a difficult algorithm like Shor's algorithm, which performs integer factorization. Integer factorization is straightforward with classical algorithms (algebraic-group factorization algorithms, Fermat's factorization method, etc.), but quantum computing is a different model of computation: running on a quantum computer changes the algorithms, and we have to use ones like Shor's.
Basically: build a good understanding of quantum computing first, and then look at quantum algorithms.
There is also some research into whether quantum computing can be used to solve hard problems, such as factoring large numbers (if this were feasible, it would break current encryption techniques).
Stackoverflow runs on a quantum computer of sorts.
Feynman has implied the possibility that quantum probability is the source of human creativity.
Individuals in the crowd present answers and vote on them with only a probability of being correct. Only by sampling the crowd many times can the probability be raised to a confident level.
So maybe Stackoverflow exemplifies a successful quantum algorithm implementation.
What do you think?
One good use of a quantum device that is possible with current technology is a random number generator.
Generating truly random bits is an important cryptographic primitive; random bits are used, for example, to generate the private key in the RSA algorithm. The standard random number generator on our PCs is not random at all in this sense: it is deterministic, so beyond the seed its output contains no entropy, and therefore isn't really random.
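This is not a quantum source, but a minimal Python sketch can illustrate the distinction being drawn here: a seeded library PRNG is fully deterministic, while the operating system's entropy pool (exposed in Python via the secrets module) is what you would actually use for key material.

    import random
    import secrets

    # A seeded PRNG is fully deterministic: same seed, same output stream.
    rng = random.Random(42)
    print([rng.randrange(256) for _ in range(4)])   # identical every run

    # secrets draws from the OS entropy pool (e.g. /dev/urandom) and is
    # the right tool for cryptographic key material.
    print(secrets.token_hex(16))                    # different every run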
I am doing a lot of Metropolis-Hastings Markov chain Monte Carlo (MCMC).
Most codes I have in use, use Mersenne Twister (MT) as pseudo random number generator (PRNG).
However, I recently read that MT is outdated and probably shouldn't be used anymore, as it fails some statistical tests and is relatively slow. So I am willing to switch.
NumPy now defaults to PCG (https://www.pcg-random.org/), which claims to be good. Other sites are rather critical, e.g. http://pcg.di.unimi.it/pcg.php.
It seems everyone praises their own work.
There is some good information already here: Pseudo-random number generator
But many answers there are already a bit dated, so I want to formulate my question more specifically.
As I said: the main use case is Metropolis-Hastings MCMC.
Therefore, I need:
uniformly distributed numbers in half-open and open intervals
around 2^50 samples; by the usual rule of thumb, the PRNG should therefore have a period of at least 2^128
sufficient quality of random numbers (whatever this might mean)
a reasonably fast PRNG (for a fixed runtime, faster code means more accuracy in the MCMC)
I do not need
cryptographic security
As I am by no means an expert, usability also counts. So I would welcome an available C++ implementation (this seems to be the standard), which is sufficiently easy for a novice to use.
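To make the use case concrete, here is a minimal sketch of the kind of loop I run, written with NumPy's default PCG64-backed generator; the target density and step size are just placeholders.

    import numpy as np

    def metropolis_hastings(log_target, x0, n_samples, step=0.5, seed=12345):
        # NumPy's default_rng is backed by PCG64 (period 2^128).
        rng = np.random.default_rng(seed)
        x = x0
        logp = log_target(x)
        samples = np.empty(n_samples)
        for i in range(n_samples):
            prop = x + step * rng.standard_normal()   # symmetric proposal
            logp_prop = log_target(prop)
            # rng.random() draws uniformly from the half-open interval [0, 1)
            if np.log(rng.random()) < logp_prop - logp:
                x, logp = prop, logp_prop
            samples[i] = x
        return samples

    # Example: sample from a standard normal target.
    draws = metropolis_hastings(lambda x: -0.5 * x * x, x0=0.0, n_samples=100_000)
    print(draws.mean(), draws.std())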
I don't know if this is the right place to ask such a question, but all computers use binary: when you program, the compiler always turns your code into a form of 0s and 1s.
But, are there computers that do NOT work on that principle?
If yes, what kind of computers are they? What can they do, and how were they created?
If no, why can't such computers be made?
Thanks, and I hope this is good information for anyone looking for a similar answer.
Yes, but they are esoteric. Modern computers are digital, use base two, are electronic, and use classical systems largely due to the success of integrated digital logic circuits, but it was a long road and other types of computers have been invented along the way.
Digital computers in other bases
A handful of computers were made based on ternary (base 3) logic. Some of these used bipolar voltages to represent values in balanced ternary. The general consensus is that if you are going to use digital circuitry, binary is the best option, so this is mostly a historical curiosity.
See: https://en.wikipedia.org/wiki/Ternary_computer
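To get a feel for balanced ternary, here is a small sketch (my own illustration, not taken from any of these machines) that converts an integer into digits from {-1, 0, +1}; note that negation is just flipping every digit, so no sign bit is needed.

    def to_balanced_ternary(n):
        # Digits are -1, 0, +1, built least-significant first,
        # then reversed for display.
        if n == 0:
            return [0]
        digits = []
        while n != 0:
            r = n % 3
            if r == 2:          # represent 2 as 3 - 1: carry one, digit -1
                r = -1
                n += 1
            digits.append(r)
            n //= 3
        return digits[::-1]

    print(to_balanced_ternary(8))    # [1, 0, -1], i.e. 9 - 1
    print(to_balanced_ternary(-8))   # [-1, 0, 1], every digit flipped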
Analog and mechanical computers
Analog computers use continuous voltages to represent values. Mechanical computers use the positions of physical parts, such as gear shafts, to represent values. There is some renewed interest in analog computers for problems such as image analysis, where researchers suspect that the benefits of performance outweigh the disadvantages of accumulated error. Whether analog computers see a resurgence is unknown.
See: https://en.wikipedia.org/wiki/Analog_computer
Quantum computers
Quantum computers represent values using superpositions of quantum states. Creating a large quantum computer is an open problem, and a number of algorithms have already been written for quantum computers which would outperform classical implementations. For example, Shor's algorithm.
See: https://en.wikipedia.org/wiki/Quantum_computing
Technically speaking, conventional computers are based on voltage levels:
0 = no voltage
1 = voltage present
However, there are new quantum computers, still far from mature, that do not use 0s and 1s; instead they use quantum bits (qubits).
Here is more information on the topic:
https://en.wikipedia.org/wiki/Quantum_computing
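For intuition only, a single qubit can be simulated classically as a pair of complex amplitudes; a minimal sketch (plain NumPy, no quantum hardware involved):

    import numpy as np

    # A qubit is a unit vector of two amplitudes, not a plain 0 or 1.
    zero = np.array([1.0, 0.0])                    # classical-like state |0>
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate

    state = H @ zero                     # equal superposition of 0 and 1
    probs = np.abs(state) ** 2           # measurement probabilities
    print(probs)                         # [0.5, 0.5]

    rng = np.random.default_rng()
    print(rng.choice([0, 1], p=probs))   # measurement collapses to 0 or 1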
Might classical pseudorandom generators be predictable by powerful quantum computers in the future, or is it proven that this is not possible?
If they are predictable, do scientists know whether there exist PRGs that are unpredictable by quantum computers?
The security of a classical Cryptographic Pseudo-Random Number Generator (CPRNG) is always based on some hardness assumption, such as "factoring is hard" or "colliding the SHA-256 function is hard".
Quantum computers make some computational problems easier. That violates some of the old hardness assumptions. But not all of them.
For example, Blum Blum Shub is likely broken by quantum computers, but no one knows how to break lattice-based cryptography with quantum computers. Showing that you can break all classical CPRNGs with quantum computers would be tantamount to showing that NP is contained in BQP, which is not expected to be the case.
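For reference, Blum Blum Shub itself fits in a few lines: square repeatedly modulo M = p*q (with p and q primes congruent to 3 mod 4) and output the low bit of each state. Its security rests on factoring M, which is exactly what Shor's algorithm undoes. The parameters below are toy values, not secure ones.

    # Toy Blum Blum Shub: x_{i+1} = x_i^2 mod M, output the low bit of each x.
    # Security reduces to factoring M -- which is why a quantum computer
    # running Shor's algorithm would break it.
    p, q = 11, 23            # toy primes, both congruent to 3 mod 4
    M = p * q
    x = 3                    # seed, must be coprime to M

    bits = []
    for _ in range(16):
        x = (x * x) % M
        bits.append(x & 1)
    print(bits)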
Even if quantum computers did break all classical CPRNGs, they happen to also fill that hole. They enable the creation of "Einstein-certified" random numbers.
I've read on Wikipedia that neural-network functions defined on a field of arbitrary real/rational numbers (along with algorithmic schemas, and the speculative "transrecursive" models) have more computational power than the computers we use today. Admittedly, it was a page of the Russian Wikipedia (ru.wikipedia.org) and may not be properly proven, but that's not the only source of such rumors.
Now, the thing that I really do not understand is: how can a string-rewriting machine (NNs are exactly string-rewriting machines, just as Turing machines are; only the programming language is different) be more powerful than a universally capable U-machine?
Yes, the descriptive instrument is really different, but the fact is that any function of such a class can be (easily or not) turned into a legal Turing machine. Am I wrong? Am I missing something important?
What makes people say that? I do know that the phenomenon of undecidability is widely accepted today (though not consistently proven, according to what I've read), but I do not see the smallest chance of NNs being able to solve that particular problem.
Add-in: By "not consistently proven according to what I've read" I meant that you might want to take a look at the papers of A. Zenkin (a Russian mathematician) from the mid-90s onward, where he persuasively argues against G. Cantor's concepts, including transfinite sets, uncountable sets, and the diagonalization method (the method used in Turing's proof of undecidability), among others. Even Gödel's incompleteness theorems were only proven properly in the 21st century. That's all just to bring Zenkin's work into the post, since I don't know how widespread that knowledge is in the CS community; forgive me if that looks stupid.
Thank you!
From what little research I've done, most of these claims of trans-Turing systems, or of the incorrectness of Cantor's diagonalization proof, etc. are, shall we say, "controversial" in legitimate mathematical circles. Words like "crank" get thrown around frequently.
Obviously, the strong Church-Turing thesis remains unproven, but as you pointed out there's really no good reason to believe that artificial neural networks constitute computational capabilities beyond general recursion/UTMs/lambda calculus/etc.
From a theoretical viewpoint, I think you're absolutely correct -- neural networks provide very little that's new or different.
From a practical viewpoint, neural networks are simply a way of casting solutions into a form where parallel execution is natural and easy, whereas Turing machines are sequential in nature, and executing their sequences in parallel is relatively difficult. In fact, most of what's been done in CPU development over the last few decades has basically been figuring out ways to execute code in parallel while maintaining the illusion that it's executing in sequence. A lot of the hardware in a modern CPU is devoted to maintaining that illusion, and the degree to which parallel execution has become explicit is mostly an admission that maintaining the illusion has become prohibitively expensive.
Anyone who "proves" that Cantor's diagonal method doesn't work proves only their own incompetence. Cf. Wilfred Hodges' An editor recalls some hopeless papers for a surprisingly sympathetic explanation of what kind of thing is going wrong with these attempts.
You can give speculative descriptions of hyper-Turing neural nets, just as you can give speculative descriptions of other kinds of hyper-Turing computers: there is nothing incoherent in the idea that hypercomputation is possible. Speculative descriptions of mechanical hypercomputers have been given in which the machine is stipulated to have infinitely fine engravings that encode an oracle for the halting problem; the existence of such a machine is consistent with Newtonian mechanics, though not with quantum mechanics. Rather, the Church-Turing thesis says that such machines cannot be constructed, and there are two reasons to believe the thesis is correct:
No such machines have ever been constructed; and
There has been work connecting models of physics to models of computation, going back to Robin Gandy in the early 1970s, with recent work by people such as David Deutsch (e.g., Machines, Logic and Quantum Physics) and John Tucker (e.g., Computations via experiments with kinematic systems), which argues that physics does not support hypercomputation.
The main point is that the truth of the Church-Turing thesis is an empirical fact, and not a mathematical fact. It's one that we can have confidence is true, but not certainty.
From a layman's perspective, I see that
NNs can be more effective at solving some types of problems than a Turing machine, but they are not computationally more powerful.
Even if NNs were provably more powerful than TMs, execution on current hardware would render them less powerful, since current hardware is only an approximation of a TM and can only execute problems computable by a bounded TM.
You may be interested in S. Franklin and M. Garzon, Neural computability. There is a preview on Google. It discusses the computational power of neural nets and also states that it is rumored that neural nets are strictly more powerful than Turing machines.
I have recently been reading about the general use of prime factors within cryptography. Everywhere I read, it states that there is no published algorithm which operates in polynomial time (as opposed to exponential time) to find the prime factors of a key.
If an algorithm were discovered or published that did operate in polynomial time, how would this impact the real-world computing environment, as opposed to the world of theory and computer science? Considering the extent to which we depend on cryptography, would the world suddenly come to a halt?
With this in mind: if P = NP is true, what might happen, and how much do we depend on the fact that it is as yet unproven?
I'm a beginner, so please forgive any mistakes in my question, but I think you'll get my general gist.
With this in mind: if P = NP is true, would they ever tell us?
Who are “they”? If it were true, we would know. The computer scientists? That’s us. The cryptographers and mathematicians? The professionals? The experts? People like us. Users of the Internet, even of Stack Overflow.
We wouldn't need to be told. We'd do the telling.
Science and research are not done behind closed doors. If someone finds out that P = NP, it couldn't be kept secret, simply because of the way research is published. In principle, everyone has access to such research.
It depends on who discovers it.
NSA and other organizations that research cryptography under state sponsorship, contrary to Konrad's assertion, do research and science behind closed doors—and guns. And they have "scooped" published academic researchers on some important discoveries. Finally, they have a history of withholding cryptanalytic advances for years after they are independently discovered by academic researchers.
I'm not big into conspiracy theories. But I'd be very surprised if a lot of "black" money hasn't been spent by governments on the factorization problem. And if any results are obtained, they would be kept secret. A lot of criticism has been leveled at agencies in the U.S. for failing to coordinate with each other to avert terrorism. It might be that notifying the FBI of information gathered by the NSA would reveal "too much" about the NSA's capabilities.
You might find the first question posed to Bruce Schneier in this interview interesting. The upshot is that NSA will always have an edge over academia, but that margin is shrinking.
For what it is worth, the NSA recommends the use of elliptic curve Diffie-Hellman key agreement, not RSA encryption. Do they like the smaller keys? Are they looking ahead to quantum computing? Or … ?
Keep in mind that factoring is not known to be (and is conjectured not to be) NP-complete, thus demonstrating a P algorithm for factoring will not imply P=NP. Presumably we could switch the foundation of our encryption algorithms to some NP-complete problem instead.
Here's an article about P = NP from the ACM: http://cacm.acm.org/magazines/2009/9/38904-the-status-of-the-p-versus-np-problem/fulltext
From the link:
Many focus on the negative, that if P = NP then public-key cryptography becomes impossible. True, but what we will gain from P = NP will make the whole Internet look like a footnote in history.

Since all the NP-complete optimization problems become easy, everything will be much more efficient. Transportation of all forms will be scheduled optimally to move people and goods around quicker and cheaper. Manufacturers can improve their production to increase speed and create less waste. And I'm just scratching the surface.
Given this quote, I'm sure they would tell the world.
I think there were researchers in Canada(?) who were having good luck factoring large numbers with GPUs (or clusters of GPUs). That doesn't mean the numbers were factored in polynomial time, but the chip architecture was more favorable to factorization.
If a truly efficient algorithm for factoring composite numbers was discovered, I think the biggest immediate impact would be on e-commerce. Specifically, it would grind to a halt until a form of encryption was developed that doesn't rely on factoring being a one-way function.
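To see concretely why efficient factoring kills RSA, here is a toy sketch (textbook-sized numbers): once an attacker has p and q, the private exponent follows in one line.

    # Toy RSA: the public key is (n, e); the private exponent d is easy to
    # compute from p and q, so factoring n = p*q breaks the scheme.
    p, q = 61, 53
    n = p * q                      # 3233, the public modulus
    e = 17                         # public exponent
    phi = (p - 1) * (q - 1)
    d = pow(e, -1, phi)            # private exponent via modular inverse

    msg = 42
    cipher = pow(msg, e, n)
    print(pow(cipher, d, n))       # 42 -- anyone who factors n can do this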
There has been a lot of research into cryptography in the private sector for the past four decades. This was a big switch from the previous era, where crypto was largely in the purview of the military and secret government agencies. Those secret agencies definitely tried to resist this change, but once knowledge is discovered, it's very hard to keep it under wraps. With that in mind, I don't think a solution to the P = NP problem would remain a secret for long, despite any ramifications it might have in this one area. The potential benefits would be in a much wider range of applications.
Incidentally, there has been some research into quantum cryptography, which relies on the foundations of quantum mechanics, in contrast to traditional public-key cryptography, which relies on the computational difficulty of certain mathematical functions and cannot provide any indication of eavesdropping or guarantee of key security.
The first practical network using this technology went online in 2008.
As a side note: if you enter the realm of quantum computing, you can factor in polynomial time with Shor's algorithm. See Rob Pike's notes from his talk on quantum computing, page 25.