I have recently been reading about the general use of prime factors within cryptography. Everywhere I read, it states that there is no 'published' algorithm which operates in polynomial time (as opposed to exponential time) to find the prime factors of a key.
If an algorithm were discovered or published which did operate in polynomial time, how would this impact the real-world computing environment, as opposed to the world of theory and computer science? Considering the extent to which we depend on cryptography, would the world suddenly come to a halt?
With this in mind, if P = NP is true, what might happen, and how much do we depend on the fact that it is as yet unproven?
I'm a beginner, so please forgive any mistakes in my question, but I think you'll get my general gist.
With this in mind, if P = NP is true, would they ever tell us?
Who are “they”? If it were true, we would know. The computer scientists? That’s us. The cryptographers and mathematicians? The professionals? The experts? People like us. Users of the Internet, even of Stack Overflow.
We wouldn't need to be told. We'd do the telling.
Science and research isn’t done behind closed doors. If someone finds out that P = NP, this couldn’t be kept secret, simply because of the way that research is published. In principle, everyone has access to such research.
It depends on who discovers it.
NSA and other organizations that research cryptography under state sponsorship, contrary to Konrad's assertion, do research and science behind closed doors—and guns. And they have "scooped" published academic researchers on some important discoveries. Finally, they have a history of withholding cryptanalytic advances for years after they are independently discovered by academic researchers.
I'm not big into conspiracy theories. But I'd be very surprised if a lot of "black" money hasn't been spent by governments on the factorization problem. And if any results are obtained, they would be kept secret. A lot of criticism has been leveled at agencies in the U.S. for failing to coordinate with each other to avert terrorism. It might be that notifying the FBI of information gathered by the NSA would reveal "too much" about the NSA's capabilities.
You might find the first question posed to Bruce Schneier in this interview interesting. The upshot is that NSA will always have an edge over academia, but that margin is shrinking.
For what it is worth, the NSA recommends the use of elliptic curve Diffie-Hellman key agreement, not RSA encryption. Do they like the smaller keys? Are they looking ahead to quantum computing? Or … ?
Keep in mind that factoring is not known to be (and is conjectured not to be) NP-complete, so demonstrating a polynomial-time algorithm for factoring would not imply P = NP. Presumably we could switch the foundation of our encryption algorithms to some NP-complete problem instead.
Here's an article about P = NP from the ACM: http://cacm.acm.org/magazines/2009/9/38904-the-status-of-the-p-versus-np-problem/fulltext
From the link:
Many focus on the negative, that if P = NP then public-key cryptography becomes impossible. True, but what we will gain from P = NP will make the whole Internet look like a footnote in history.

Since all the NP-complete optimization problems become easy, everything will be much more efficient. Transportation of all forms will be scheduled optimally to move people and goods around quicker and cheaper. Manufacturers can improve their production to increase speed and create less waste. And I'm just scratching the surface.
Given this quote, I'm sure they would tell the world.
I think there were researchers in Canada(?) who were having good luck factoring large numbers with GPUs (or clusters of GPUs). That doesn't mean the numbers were factored in polynomial time, but the chip architecture was more favorable to factorization.
If a truly efficient algorithm for factoring composite numbers was discovered, I think the biggest immediate impact would be on e-commerce. Specifically, it would grind to a halt until a form of encryption was developed that doesn't rely on factoring being a one-way function.
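To make the "one-way function" point concrete, here is a minimal Python sketch (illustrative only; the primes and sizes are toy values chosen for this example, not anything from a real system) of the asymmetry RSA-style schemes rely on: multiplying two primes is instant, while recovering them by brute-force trial division scales with the square root of the modulus.

from math import isqrt

# Toy illustration of the asymmetry: the primes below are small examples;
# real RSA moduli are around 2048 bits, where trial division is hopeless.
p, q = 104_729, 1_299_709        # the 10,000th and 100,000th primes
n = p * q                        # the "public" modulus: trivial to compute

def trial_division(n):
    # Brute-force search for the smallest factor.
    for d in range(2, isqrt(n) + 1):
        if n % d == 0:
            return d, n // d
    return None

print(trial_division(n))         # (104729, 1299709), after roughly 10^5 iterations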
There has been a lot of research into cryptography in the private sector for the past four decades. This was a big switch from the previous era, where crypto was largely in the purview of the military and secret government agencies. Those secret agencies definitely tried to resist this change, but once knowledge is discovered, it's very hard to keep it under wraps. With that in mind, I don't think a solution to the P = NP problem would remain a secret for long, despite any ramifications it might have in this one area. The potential benefits would be in a much wider range of applications.
Incidentally, there has been some research into quantum cryptography. It relies on the foundations of quantum mechanics, in contrast to traditional public-key cryptography, which relies on the computational difficulty of certain mathematical functions and cannot provide any indication of eavesdropping or any guarantee of key security.
The first practical network using this technology went online in 2008.
As a side note, if you enter the realm of quantum computing, you can factor in polynomial time. See Rob Pike's notes from his talk on quantum computing, page 25, which covers Shor's algorithm.
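For a feel of how Shor's algorithm turns period-finding into factoring, here is a purely classical Python sketch: the period is found by brute force (the one step a quantum computer would perform efficiently), and the rest is the standard classical post-processing. The function names and the toy value 15 are my own choices for illustration.

from math import gcd

def multiplicative_order(a, n):
    # Brute-force the period r of a mod n; this is the step a quantum
    # computer would perform efficiently.
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def factor_from_period(n, a):
    # Classical post-processing of Shor's algorithm: given the period r
    # of a mod n, try gcd(a^(r/2) - 1, n) as a nontrivial factor.
    g = gcd(a, n)
    if g != 1:
        return g                  # lucky: a already shares a factor with n
    r = multiplicative_order(a, n)
    if r % 2 == 1:
        return None               # odd period: retry with a different a
    y = pow(a, r // 2, n)
    if y == n - 1:
        return None               # trivial square root: retry
    return gcd(y - 1, n)

print(factor_from_period(15, 7))  # 3 (and 15 // 3 == 5)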
Related
NP problems look like they are suitable for use as trapdoor functions or proofs of work, since they are difficult to solve but easy to verify. Unfortunately, they seem a little hard to use in adversarial settings where an opponent can control problem selection, because while the worst case is hard (assuming P ≠ NP), particular instances can be solved very quickly.
So: is there any algorithm which can take instances and estimate - more efficiently than trying to solve them - how hard or close to worst-case they are?
(The context is musing about a Bitcoin protocol where the proofs of work were reusable rather than useless hash checks. The obvious approach is to have a central authority issue, for each transaction block, an NP instance which corresponds to a real-world problem. But the central authority could be subverted and start issuing easy problems, which would render the network vulnerable to double-spends. One could accept problems from multiple authorities, or from anyone, but the chosen-easy-problem attack remains. If there were some way to estimate the difficulty of any problem presented to the network, then 'too easy' problems could simply be ignored, falling back to the hash race if necessary.)
EDIT: jaxtr links me to "Predicting Satisfiability at the Phase Transition", which gives algorithms which estimate hardness at 70% accuracy - but they don't seem to investigate whether the algorithm can be deliberately fooled. (As well, one can apparently generate SAT problems with specified probabilities of being satisfiable.)
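As a rough illustration of that line of work (and emphatically not the paper's classifier), one cheap heuristic for random 3-SAT is the clause-to-variable ratio, which peaks in empirical difficulty around 4.26; the Python sketch below generates random instances and scores them that way. An adversary submitting structured instances could of course still fool such a simple proxy, which is exactly the open question above.

import random

def random_3sat(num_vars, num_clauses, rng=random):
    # Generate a random 3-SAT instance: each clause is three distinct
    # variables, each negated with probability 1/2.
    clauses = []
    for _ in range(num_clauses):
        chosen = rng.sample(range(1, num_vars + 1), 3)
        clauses.append(tuple(v if rng.random() < 0.5 else -v for v in chosen))
    return clauses

def hardness_proxy(num_vars, clauses):
    # Crude difficulty estimate: distance of the clause/variable ratio
    # from the ~4.26 phase-transition point for random 3-SAT.
    ratio = len(clauses) / num_vars
    return ratio, abs(ratio - 4.26)

instance = random_3sat(200, 852)
print(hardness_proxy(200, instance))   # ratio 4.26 -> near the hard region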
This is the same problem faced by researchers trying to create public-key encryption algorithms based on NP-completeness. As far as I know, there have been some stabs at it, but it's still an open problem. See the discussion here: Are there public key cryptography algorithms that are provably NP-hard to defeat?
I know I've seen more recent work, but can't find it offhand. I recall a book composed of articles about alternative cryptosystems should factorization suddenly become cheap, and I'll try to dig up the link.
Edit: The comment below points to the book I was thinking of. The website has lots of good references to various relevant papers. See the "code based" section in particular.
I've read on Wikipedia that neural-network functions defined over a field of arbitrary real/rational numbers (along with algorithmic schemas and the speculative 'transrecursive' models) have more computational power than the computers we use today. Admittedly it was a page of the Russian Wikipedia (ru.wikipedia.org) and may not be properly proven, but that's not the only source of such... rumors.
Now, the thing that I really do not understand is: how can a string-rewriting machine (NNs are string-rewriting machines just as Turing machines are; only the programming language differs) be more powerful than a universally capable U-machine?
Yes, the descriptive instrument is really different, but the fact is that any function of such a class can be (easily or not) turned into a legal Turing machine. Am I wrong? Am I missing something important?
What causes people to say that? I do know that the phenomenon of undecidability is widely accepted today (though, according to what I've read, not consistently proven), but I do not see the smallest chance of NNs being able to solve that particular problem.
Addendum: by "not consistently proven according to what I've read" I mean that you might want to take a look at the papers of A. Zenkin (a Russian mathematician) from the mid-90s onward, in which he persuasively argues against G. Cantor's concepts, including transfinite sets, uncountable sets, and the diagonalization method (the method used in Turing's undecidability proof), and maybe others. On this view, even Gödel's incompleteness theorems were only proven properly in the 21st century. I mention Zenkin's work here only because I don't know how widespread that knowledge is in the CS community, so forgive me if that looks stupid.
Thank you!
From what little research I've done, most of these claims of trans-Turing systems, or of the incorrectness of Cantor's diagonalization proof, etc. are, shall we say, "controversial" in legitimate mathematical circles. Words like "crank" get thrown around frequently.
Obviously, the strong Church-Turing thesis remains unproven, but as you pointed out there's really no good reason to believe that artificial neural networks constitute computational capabilities beyond general recursion/UTMs/lambda calculus/etc.
From a theoretical viewpoint, I think you're absolutely correct -- neural networks provide very little that's new or different.
From a practical viewpoint, neural networks are simply a way of casting solutions into a form where parallel execution is natural and easy, whereas Turing machines are sequential in nature, and executing their sequences in parallel is relatively difficult. In fact, most of what's been done in CPU development over the last few decades has basically been figuring out ways to execute code in parallel while maintaining the illusion that it's executing in sequence. A lot of the hardware in a modern CPU is devoted to maintaining that illusion, and the degree to which parallel execution has become explicit is mostly an admission that maintaining the illusion has become prohibitively expensive.
Anyone who "proves" that Cantor's diagonal method doesn't work proves only their own incompetence. Cf. Wilfred Hodges' An editor recalls some hopeless papers for a surprisingly sympathetic explanation of what kind of thing is going wrong with these attempts.
You can provide speculative descriptions of hyper-Turing neural nets, just as you can provide speculative descriptions of other kinds of hyper-Turing computers: there's nothing incoherent in the idea that hypercomputation is possible. Speculative descriptions of mechanical hypercomputers have been given in which the machine is stipulated to have infinitely fine engravings that encode an oracle for the halting problem; the existence of such a machine is consistent with Newtonian mechanics, though not with quantum mechanics. Rather, the Church-Turing thesis says that such machines can't be constructed, and there are two reasons to believe the Church-Turing thesis is correct:
No such machines have ever been constructed; and
There has been work connecting models of physics to models of computation, going back to the early 1970s with Robin Gandy, and more recently by people such as David Deutsch (e.g., Machines, Logic and Quantum Physics) and John Tucker (e.g., Computations via experiments with kinematic systems), which argues that physics doesn't support hypercomputation.
The main point is that the truth of the Church-Turing thesis is an empirical fact, and not a mathematical fact. It's one that we can have confidence is true, but not certainty.
From a layman's perspective, I see that:
NNs can be more effective at solving some types of problems than a Turing machine, but they are not computationally more powerful.
Even if NNs were provably more powerful than TMs, execution on current hardware would render them less powerful, since current hardware is only an approximation of a TM and can only execute problems computable by a bounded TM.
You may be interested in S. Franklin and M. Garzon, Neural computability. There is a preview on Google. It discusses the computational power of neural nets and also states that it is rumored that neural nets are strictly more powerful than Turing machines.
I'm unclear about the legal status of utilizing an algorithm from a published academic paper. Is there an implicit patent over that material? What about open-source applications? Would it be OK to implement that algorithm in an open-source app, under one of the free software licenses?
Let's say I have access to paper A which describes algorithm B. How can I determine if I can use algorithm B in my commercial closed-source app C or open source app D? Is the answer always "no"? Is there an expiration date?
There's no such thing as an "implicit patent".
Unfortunately, I'd think you need to evaluate the IP restrictions for each paper.
One of the more famous situations where the algorithm described in an academic paper was ultimately encumbered by a patent was the RSA asymmetric encryption algorithm. A paper describing the algorithm, "On digital signatures and public-key cryptosystems", was published in an ACM journal in 1977, and a patent was awarded in 1983 (US Patent 4,405,829).
I have not read the paper, so I don't know whether the patent application was mentioned, but I do know that the algorithm was rather widely implemented, and then, once the patent was awarded, MIT/RSADSI started enforcing it. The patent became a rather big issue with PGP, for example. MIT/RSADSI ultimately permitted free use of the patent for non-commercial purposes, I believe. A similar situation occurred with LZW compression.
AFAIK, having something published in a publicly accessible way (e.g., as an academic paper) actually eliminates the possibility of a patent in Europe. In the US, there is a one-year grace period from first publication in which to obtain a patent. Students and professors usually recognize the potential and apply to the university's technology transfer office, which figures out what to do with it. This, for example, was the case with CAPTCHAs. If the paper is by a commercial company (e.g., IBM), it is much more likely to be patented or to have additional protections, because employees in research are evaluated partly on the number of patent applications.
The problem is that there is often no way to know, because patent lawyers will usually write a general title that bears no obvious connection to the original idea. So contacting the author, just in case, may be prudent. The author may also have an existing implementation available, often open source.
This is what happened with GIF: the compression algorithm was published openly, but without mentioning that it had been patented.
Even if the authors of the paper claim not to have patented it, there is no guarantee that another algorithm hasn't already been patented, and a court might decide that it covers that work.
Remember as well that inventing it yourself doesn't protect you from patents - you can invent an algorithm that you have never seen in print and later discover it's covered by some patent.
As Michael said, there's no such thing as an implicit patent.
Generally, when an algorithm or any other research is published, the authors want to put it out into the world for other people to use. If you have access to a published paper which describes some algorithm, it's probably going to be okay for you to use it in any program you might create. However, some algorithms may be patented by their inventors before being published, and in that case if you want to use the algorithm you would have to come to some arrangement with the patent holder, regardless of whether your application is open-source or not. Basically to be safe, you need to check the specific restrictions on that algorithm. (The type of people who publish papers about algorithms are often more likely to let you use their algorithms if your application is open source than if it's proprietary.)
Can quantum algorithms be useful?
Has any one been successful in putting quantum algorithms to any use?
"Quantum algorithms" are algorithms to be run on quantum computers.
There are things that can be done quickly in the quantum computation model that are not known (or believed) to be possible with classical computation: Discrete logarithm and Integer factorisation (see Shor's algorithm) are in BQP, but not believed to be in P (or BPP). Thus when/if a quantum computer is built, it is known that it can break RSA and most current cryptography.
However,
quantum computers cannot (are not believed to, I mean) solve NP-complete problems in polynomial time, and more importantly,
no one has built a quantum computer yet, and it is not even clear if it will be possible to build one -- avoiding decoherence, etc. (There have been claims of quantum computers with a limited number of qubits -- 5 to 10, but clearly they are not useful for anything much.)
"Well, there's a quantum computer that can factor 15, so those of you using
4-bit RSA should worry." -- Bruce Schneier
[There is also the idea of quantum cryptography, which is cryptography over a quantum channel, and is something quite different from quantum computation.]
The only logical answer is that they are both useful and not useful. ;-)
My understanding is that current quantum computing capabilities can be used to exchange keys securely. The exchanged keys can then be used to perform traditional cryptography.
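For intuition, here is a toy, purely classical Python simulation of the BB84 sifting step (no real qubits and no eavesdropping check; all names and sizes here are made up for illustration): Alice encodes random bits in random bases, Bob measures in random bases, and they keep only the positions where their bases happen to agree.

import secrets

n = 32
alice_bits  = [secrets.randbits(1) for _ in range(n)]
alice_bases = [secrets.randbits(1) for _ in range(n)]
bob_bases   = [secrets.randbits(1) for _ in range(n)]

# If Bob's basis matches Alice's he reads her bit; otherwise his result is random.
bob_bits = [bit if ab == bb else secrets.randbits(1)
            for bit, ab, bb in zip(alice_bits, alice_bases, bob_bases)]

# Sifting: keep the positions where the bases matched; this becomes the shared key,
# usable afterwards with traditional symmetric cryptography.
sifted_key = [bit for bit, ab, bb in zip(alice_bits, alice_bases, bob_bases) if ab == bb]
print("sifted key:", sifted_key)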
As far as I know about quantum computing and algorithms, quantum algorithms see quite a lot of use in cryptography, so if you are really interested in cryptography, do look into them. Basically, it all comes down to how well you know the basics of quantum mechanics and discrete mathematics. For example, take a difficult algorithm like Shor's algorithm, which is basically integer factorization. There are classical factoring algorithms (algebraic-group factorization, Fermat's factorization method, etc.), but when it comes to quantum computing things are totally different: you are running on quantum computers, so the algorithms change and you use algorithms like Shor's instead.
Basically, build a good understanding of quantum computing first, and then look at quantum algorithms.
There is also some research into whether quantum computing can be used to solve hard problems, such as factoring large numbers (if this was feasible it would break current encryption techniques).
Stackoverflow runs on a quantum computer of sorts.
Feynman has implied the possibility that quantum probability is the source of human creativity.
Individuals in the crowd present answers and vote on them with only a probability of being correct. Only by sampling the crowd many times can the probability be raised to a confident level.
So maybe Stack Overflow exemplifies a successful quantum algorithm implementation.
What do you think?
One good use of a quantum device that is possible with current technology is a random number generator.
Generating truly random bits is an important cryptographic primitive; random bits are used, for example, in the RSA algorithm to generate the private key. The random number generator in a typical PC isn't really random at all, in the sense that its source has no entropy.
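As a minimal sketch of where such randomness plugs in (assuming Python's standard secrets module as the interface; a quantum RNG would simply be another physical entropy source feeding the operating system's pool):

import secrets

# Draw key material from a cryptographically secure source (os.urandom /
# secrets), which the OS feeds from whatever entropy sources it has.
private_candidate = secrets.randbits(2048)   # e.g. raw material for an RSA prime search
nonce = secrets.token_bytes(16)              # e.g. a per-message nonce / IV
print(private_candidate.bit_length(), nonce.hex())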
Since graduating in 2006 from a very small school with a badly shaped and outdated program (I'm a foreigner and didn't know of any better school at the time), I've come to realize that I missed a lot of basic concepts, from both a mathematical and a software perspective, that are the foundations of other, higher-level concepts.
For example, I tried to watch the MIT OpenCourseWare lectures on Introduction to Algorithms, but quickly realized I was missing several mathematical concepts needed to understand the course.
So what are the core mathematical concepts a good software engineer should know? And what books/sites would you recommend?
Math for Programmers. A good read.
Boolean algebra is fundamental to understanding control structures and refactoring. For example, I've seen many bugs caused by programmers who didn't know (or couldn't use) deMorgan's law. As another example, how many programmers immediately recognize that
if (condition-1) {
    if (condition-2) {
        action-1
    } else {
        action-2
    }
} else {
    action-2
}
can be rewritten as
if (condition-1 and condition-2) {
    action-1
} else {
    action-2
}
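A quick way to convince yourself of both De Morgan's law and the refactoring above is to check every truth assignment; a tiny Python sketch (the names are placeholders for the conditions and actions):

from itertools import product

# Exhaustive truth-table check of De Morgan's law and of the refactoring above.
for c1, c2 in product([False, True], repeat=2):
    assert (not (c1 and c2)) == ((not c1) or (not c2))            # De Morgan's law
    nested = "action-1" if (c2 if c1 else False) else "action-2"  # original nested form
    flat = "action-1" if (c1 and c2) else "action-2"              # refactored form
    assert nested == flat
print("both equivalences hold for all inputs")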
Discrete mathematics and combinatorics are tremendously helpful in understanding the performance of various algorithms and data structures.
As mentioned by Baltimark, mathematical induction is very useful in reasoning about loops and recursion.
Set theory is the basis of relational databases and SQL.
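For a tiny illustration (Python sets standing in for relations; the table contents are made up), the core relational operations are just set operations:

customers = {"alice", "bob", "carol"}
subscribers = {"bob", "carol", "dave"}

print(customers & subscribers)   # intersection ~ INTERSECT / INNER JOIN on a key
print(customers | subscribers)   # union        ~ UNION
print(customers - subscribers)   # difference   ~ EXCEPT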
By way of analogy, let me point out that carpenters routinely use a variety of rule-of-thumb techniques in constructing things like roofs and stairs. However, a knowledge of geometry allows you to solve problems for which you don't have a "canned" rule of thumb. It's like learning to read via phonetics versus sight-recognition of a basic vocabulary. 90+% of the time there's not much difference. But when you run into an unfamiliar situation, it's VERY nice to have the tools to work out the solution yourself.
Finally, the rigor/precision required by mathematics is very useful preparation for programming, regardless of specific technique. Again, many of the bugs in programming (or even specifications) that I've seen in my career have sloppy thinking at their root cause.
I would go with the fields that Landon stated (Discrete Math, Linear Algebra, Combinatorics, Probability and Statistics, Graph Theory) and add mathematical logic.
This would give you a grip on most fields of CS. If you want to go into special fields, you have to dive into some areas especially:
Computer graphics -> Linear Algebra
Gaming -> Linear Algebra, Physics
Computer Linguistics -> Statistics, Graph Theory
AI -> Statistics, Stochastics, Logic, Graph Theory
In order of importance:
Counting (needed for loops)
Addition, subtraction, multiplication, division.
Algebra (only really required to understand the use of variables).
Boolean algebra, boolean logic and binary.
Exponents and logarithms (e.g. to understand big-O notation; see the binary-search sketch at the end of this answer).
Anything more advanced than that is usually algorithm-specific or domain-specific. Depending on which areas you are interested in, the following may also be relevant:
Linear algebra and trigonometry (3D visualization)
Discrete mathematics and set theory (database design, algorithm design, compiler design).
Statistics (well, for statistical and/or scientific/economic applications. possibly also useful for algorithm design).
Physics (for simulations).
Understanding functions is also useful (I don't remember what the mathematical term is for that area), but if you know how to program you probably already do.
My point being: A ten year old should know enough mathematics to be able to understand programming. There isn't really much math required for the basic understanding of things. It's all about the logic, really.
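Here is the binary-search sketch mentioned above, showing why logarithms turn up in big-O analysis (a throwaway example, not taken from any particular textbook): searching a sorted list of a million items takes only about log2(1,000,000), i.e. roughly 20, comparisons.

from math import log2

def binary_search(items, target):
    # Halve the search range on every comparison.
    lo, hi, steps = 0, len(items) - 1, 0
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid, steps
        if items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1, steps

data = list(range(1_000_000))
print(binary_search(data, 765_432))   # found in fewer than 20 comparisons
print(round(log2(len(data)), 1))      # log2 of a million is about 19.9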
"Proof by induction" is a core mathematical concept for programmers to know.
Big-O notation in general algorithm analysis, and in relation to the standard collections (sorting, retrieval, insertion and deletion).
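To make that concrete, here is a rough Python sketch (timings will vary by machine) of how big-O shows up in everyday collections: membership testing is O(n) on a list but O(1) on average for a set.

import timeit

n = 100_000
as_list = list(range(n))
as_set = set(as_list)

# Looking up the worst-case element: a list scans every item, a set does one hash probe.
print(timeit.timeit(lambda: (n - 1) in as_list, number=200))
print(timeit.timeit(lambda: (n - 1) in as_set, number=200))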
For discrete math, here is an awesome set of 20 lectures from Arsdigita University. Each is about an hour and twenty minutes long.
Start with what we CS folks call "discrete math". Calculus and linear algebra can come in quite handy too because they get your foot in the door to a lot of application domains. Once you've mastered those three, go for probability theory. Those 4 will get you to competency in 95% (I made that up) of application domains.
Concrete Mathematics covers most of the major topics. A good book on Discrete Math, like Rosen's Discrete Mathematics and Its Applications, will fill in any gaps.
I think it depends on your focus. A few years ago I purchased The Art of Computer Programming set by Donald Knuth. After looking at the books I realized pretty much everything is calculus proofs. If you're interested in developing your own generic algorithms and proofs for them, then I recommend being able to understand the above books, since that's what you'd be dealing with in that world. On the other hand, if you only want/need to use various sorting/searching/tree/etc. routines, then big-O notation at a minimum, Boolean math, and general algebra will be fine. If you're dealing with 3D, then geometry and trig as well.
I tend to be more on the using side than the proving side, and while I'd like to think I've done some clever things over the years, I've never sat down and developed a new sorting routine. The best advice I can give is to learn what you need for your field, but expose yourself to higher levels so you know they exist and how much more there is to learn; you won't get much growth otherwise.
I would say Boolean logic: AND, OR, XOR, NOT.
I've found that, as programmers, we use this more often than the rest of the math concepts.
Basic Algebra and Statistics are good starting points, and the foundation for a lot of other fields.
Here is a simple one that baffles me when I see developers that don't understand it:
- Order of Operations
Chapter 1 of "The Art of Computer Programming" aims to provide exactly this.
There was a book that was recommended...the title was something like Concrete Mathematics. It was recommended in a few questions.
Back in school, one of my instructors said that for business applications, all you need to know is how to add, subtract, multiply, and divide; the requester will know all the other formulas and tell you what is needed. Now, realize that this was at a school focused on financial reporting and applications. To this day, it has held true for me. I have never once needed to know more than that.
Check the book Foundations of Computer Science
This book is authored by Al Aho and Jeff Ullman, and the entire book is available online.
This is what the authors say in their Preface about the goal of this book:
"Foundations of Computer Science covers subjects that are often found split
between a discrete mathematics course and a sophomore-level sequence in computer
science in data structures. It has been our intention to select the mathematical
foundations with an eye toward what the computer user really needs, rather than
what a mathematician might choose."
A site for brushing up on math:
http://www.khanacademy.org/
My math background is really poor (Geologist by training), but I took a discrete math class in high school and I use the concepts every day as a programmer. It is probably the most valuable class I took in all of my education as it relates to my current profession.
Discrete Math
Linear Algebra
Combinatorics
Probability and Statistics
Graph Theory
Boolean Algebra
Set Theory
Discrete Math
Well, that depends on what your goal is. As someone said, Linear Algebra, Combinatorics, Probability and Statistics, and Graph Theory are important if you're into solving hard problems. Asymptotic growth of functions (big-Oh notation) is very important. You will also need to master summations and series if you need to analyze some of the more complex algorithms (see the appendix of Cormen et al.'s Introduction to Algorithms).
Even if you're into "Java for the enterprise" or "server-side PHP", you will find some Statistics and Algorithm Complexity (hence combinatorics, induction, summations, series, etc) useful when your boss wants you to get the server to work faster, and adding new hardware doesn't seem to help. :-) I've been through that once.
Why is everybody including probability and statistics in the gold list without mentioning calculus? One cannot understand what probability and statistics are about without at least a working knowledge of limits, derivatives, integrals and series. And all in all, calculus (together with linear algebra) is the workhorse of all mathematics.
I think algorithms and theory are of great importance. Being able to come up with a fast, and correct solution is what differentiates good programmers from the rest. Also, being able to prove your algorithm (using standard proof techniques-- induction, contradiction, etc) is equally important.
Yeah, I would say a basic understanding of induction helps so that you understand what n represents in algorithms. Also some Logic and Discrete Structures is helpful.
Probability and Statistics are very helpful if you ever have to do anything resembling machine learning.
I cover the basics in my "Computing Your Skill" blog post where I discuss how Xbox Live's TrueSkill ranking and matchmaking algorithm works.