While googling I could find only Feistel ciphers and no relevant information on non-Feistel ciphers. Can someone suggest some good non-Feistel ciphers?
And yes this is homework.
There's way more than Feistel ciphers. :)
The simple answers: no stream ciphers, such as RC4, are Feistel ciphers. No public-key ciphers, such as RSA or ElGamal, are Feistel ciphers.
And the perhaps-surprising counter-example: Rijndael (the new AES), despite being a block cipher, isn't Feistel.
If you're really interested in cryptography, I strongly recommend reading the Handbook of Applied Cryptography, which is freely available and significantly better than most undergraduate texts. Schneier's "Applied Cryptography" is an excellent introduction, but it doesn't get into as much detail as one might like.
Rijndael, Square, Serpent, IDEA, Noekeon, etc. Wikipedia has a list of block ciphers, and the structure (Feistel, Feistel-like (e.g., unbalanced Feistel), substitution-permutation network (SPN), etc.) is mentioned in each article. SPN and Feistel are the most common, as both designs make it obvious that the function will be invertible. Other designs do occur, but are rarer. All the ciphers in the standards (SSL/TLS, SSH, etc.) are of one of these two types.
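To see why the Feistel structure is automatically invertible, here is a minimal Python sketch (the round function F, the 32-bit halves, and the key list are arbitrary choices for illustration, not any standardized cipher): decryption just runs the same rounds with the subkeys reversed, and F itself never needs an inverse.

```python
import hashlib

def F(half, key):
    # Toy round function: any function of (half, key) works here,
    # because a Feistel network never needs to invert F.
    data = half.to_bytes(4, "big") + key
    return int.from_bytes(hashlib.sha256(data).digest()[:4], "big")

def feistel_encrypt(left, right, keys):
    for k in keys:                    # one round: (L, R) -> (R, L xor F(R, k))
        left, right = right, left ^ F(right, k)
    return left, right

def feistel_decrypt(left, right, keys):
    for k in reversed(keys):          # same rounds, subkeys in reverse order
        left, right = right ^ F(left, k), left
    return left, right

keys = [b"k1", b"k2", b"k3", b"k4"]   # hypothetical subkeys
ct = feistel_encrypt(0xDEADBEEF, 0x01234567, keys)
assert feistel_decrypt(*ct, keys) == (0xDEADBEEF, 0x01234567)
```

An SPN, by contrast, must build every layer (S-boxes and permutations) out of individually invertible pieces, which is the other standard way to guarantee decryptability.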
I suggest putting in a little more effort. Even a cursory online search turns up definitions of "Feistel cipher", as well as descriptions of a wide variety of cipher procedures. It should not be too hard to tell which are clearly not Feistel ciphers.
I further recommend finding a good book on the subject, such as Bruce Schneier's "Applied Cryptography" (either edition).
I have designed a (rather simple) network protocol which relies heavily on VLQ-coded integers and zig-zag coding (as described in the Protocol Buffers documentation). I learned about these techniques on the Internet, and they felt quite obvious to me. However, now I want to compose an Internet-Draft describing my protocol, and it seems ridiculous to reference Wikipedia articles in a normative document.
Does somebody know a well-recognized publication, describing VLQ and zig-zag coding? Who is their original inventor?
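While waiting for a citable reference, here is a minimal Python sketch of the two techniques as the Protobuf documentation describes them (the function names are mine, and the zig-zag shift assumes 64-bit signed integers):

```python
def zigzag_encode(n):
    # Map signed to unsigned: 0 -> 0, -1 -> 1, 1 -> 2, -2 -> 3, ...
    return (n << 1) ^ (n >> 63)       # n in the signed 64-bit range

def zigzag_decode(z):
    return (z >> 1) ^ -(z & 1)

def vlq_encode(u):
    # Base-128 varint, least-significant group first (Protobuf style).
    out = bytearray()
    while True:
        byte = u & 0x7F
        u >>= 7
        if u:
            out.append(byte | 0x80)   # set continuation bit: more bytes follow
        else:
            out.append(byte)
            return bytes(out)

def vlq_decode(data):
    u, shift = 0, 0
    for b in data:
        u |= (b & 0x7F) << shift
        shift += 7
        if not b & 0x80:              # continuation bit clear: last byte
            return u

z = zigzag_encode(-300)               # 599
assert vlq_encode(z) == b"\xd7\x04"
assert zigzag_decode(vlq_decode(vlq_encode(z))) == -300
```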
I am implementing a custom crypto library using the Diffie-Hellman protocol (yes, I know about RSA/SSL and the like; I am using it for specific purposes), and so far it has turned out better than I originally expected: using GMP, it's very fast.
My question is whether, besides the obvious key-exchange part, this protocol can be used for digital signatures as well.
I have looked at quite a few resources online, but so far my search has been fruitless.
Is this at all possible?
Any (serious) ideas are welcome.
Update:
Thanks for the comments. And for the more curious people:
My DH implementation is meant, among other things, to distribute encrypted "resources" to client-side applications. Both are, for the most part, my own code.
Every client has a DH key pair, and I use it along with my server's public key to generate the shared keys. In turn, I use them for HMACs and symmetric encryption (a toy sketch of this setup follows below).
DH keys are built anywhere from 128 up to 512 bits, using safe primes as the modulus.
I realize that "pure" D-H alone can't be used for signatures; I was hoping for something close to it (or as simple).
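To make the setup above concrete, here is a toy Python sketch of the flow I described (DH exchange, then hashing the shared secret into an HMAC key). The 64-bit prime is for illustration only: it is not a safe prime, and it is far smaller than anything securely usable.

```python
import hashlib, hmac, secrets

p = 2**64 - 59                        # toy modulus (prime, but NOT a safe prime)
g = 2

a = secrets.randbelow(p - 2) + 1      # client's private key
b = secrets.randbelow(p - 2) + 1      # server's private key
A, B = pow(g, a, p), pow(g, b, p)     # public keys, exchanged over the wire

shared_client = pow(B, a, p)          # client combines the server's public key
shared_server = pow(A, b, p)          # server combines the client's public key
assert shared_client == shared_server

# Hash the shared secret down to a key for HMAC / symmetric encryption.
key = hashlib.sha256(shared_client.to_bytes(8, "big")).digest()
tag = hmac.new(key, b"encrypted resource", hashlib.sha256).hexdigest()
```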
It would appear this is feasible: http://www.quadibloc.com/crypto/pk050302.htm.
I would question why you are doing this, though. The first rule of implementing crypto is: don't implement crypto. Plenty of libraries already exist, and you would probably be better off leveraging them; crypto code is notoriously hard to get right even if you understand the science behind it.
DSA is the standard way to make digital signatures based on the discrete logarithm problem.
And to answer a potential future question: ephemeral-static Diffie-Hellman is the standard way to implement asymmetric encryption (to send messages where you know and trust the recipient's public key, for example through a certificate, but the recipient does not know your key).
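As a rough illustration of the DSA route, assuming the third-party pyca/cryptography package, signing and verifying look something like this:

```python
# pip install cryptography
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import dsa

private_key = dsa.generate_private_key(key_size=2048)
message = b"resource manifest"

signature = private_key.sign(message, hashes.SHA256())

# verify() returns None on success and raises InvalidSignature otherwise.
private_key.public_key().verify(signature, message, hashes.SHA256())
```

Note that DSA reuses the same discrete-logarithm machinery (and the same kind of group parameters) as DH, so it should slot into a GMP-based implementation fairly naturally.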
First of all, is proving the correctness of an algorithm only possible for algorithms that have no side effects?
Secondly, where could I learn about this process? Any good books, articles, etc.?
Coq is a proof assistant from which you can extract verified OCaml code. It's pretty complicated, though. I never got around to looking at it, and my coworker started and then stopped using it after two months. That was mostly because he wanted to get things done quicker, but if you need to verify an algorithm it might be a good idea.
Here is a course that uses Coq and talks about proving algorithms.
And here is a tutorial about writing academic papers in Coq.
It's generally a lot easier to verify/prove correctness when no side effects are involved, but it's not an absolute requirement.
You might want to look at some of the documentation for a formal specification language like Z. A formal specification isn't a proof itself, but is often the basis for one.
I think that verifying the correctness of an algorithm would be validating its conformance with a specification. There is a branch of theoretical computer science called Formal Methods, which may be what you are looking for if you need to get as close to proof as you can. From Wikipedia:
"Formal methods are a particular kind of mathematically-based techniques for the specification, development and verification of software and hardware systems."
You will be able to find many learning resources and tools from the multitude of links on the linked Wikipedia page and from the Formal Methods wiki.
Usually proofs of correctness are very specific to the algorithm at hand.
However, there are several well-known tricks that are used and re-used again and again. For example, with iterative algorithms you can use loop invariants, and with recursive algorithms, induction on the structure of the input.
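As a small, hypothetical Python illustration of a loop invariant: an integer square root where the invariant, combined with the negated loop condition, yields the postcondition.

```python
def int_sqrt(n):
    """Largest r with r*r <= n, with the invariant written out as asserts."""
    assert n >= 0                     # precondition
    r = 0
    while (r + 1) * (r + 1) <= n:
        r += 1
        assert r * r <= n             # invariant: holds after every iteration
    # Invariant plus the negated loop condition give the postcondition:
    assert r * r <= n < (r + 1) * (r + 1)
    return r

assert int_sqrt(15) == 3 and int_sqrt(16) == 4
```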
Another common trick is reducing the original problem to a problem for which your algorithm's proof of correctness is easier to show, then either generalizing the easier problem or showing that a solution to the easier problem can be translated into a solution to the original problem. Here is a description.
If you have a particular algorithm in mind, you may do better asking how to construct a proof for that algorithm than asking for a general answer.
Buy this book: http://www.amazon.com/Science-Programming-Monographs-Computer/dp/0387964800
The Gries book, The Science of Programming, is great stuff. Patient, thorough, complete.
Logic in Computer Science, by Huth and Ryan, gives a reasonably readable overview of modern systems for verifying systems. Once upon a time people talked about proving programs correct, in programming languages which may or may not have side effects. The impression I get from this book and elsewhere is that real applications are different: for instance, proving that a protocol is correct, or that a chip's floating-point unit can divide correctly, or that a lock-free routine for manipulating linked lists is correct.
ACM Computing Surveys Vol 41 Issue 4 (October 2009) is a special issue on software verification. It looks like you can get to at least one of the papers without an ACM account by searching for "Formal Methods: Practice and Experience".
The tool Frama-C, for which Elazar suggests a demo video in the comments, gives you a specification language, ACSL, for writing function contracts and various analyzers for verifying that a C function satisfies its contract and safety properties such as the absence of run-time errors.
An extended tutorial, ACSL by Example, shows examples of actual C algorithms being specified and verified, and separates the side-effect-free functions from the effectful ones (the side-effect-free ones are considered easier and come first in the tutorial). This document is also interesting in that it was not written by the designers of the tools it describes, so it gives a fresher and more didactic look at these techniques.
If you are familiar with LISP then you should definitely check out ACL2: http://www.cs.utexas.edu/~moore/acl2/acl2-doc.html
Dijkstra's A Discipline of Programming and his EWDs lay the foundation for formal verification as a science in programming. A simpler work is Wirth's Systematic Programming, which begins with a simple approach to using verification. Wirth uses pre-ISO Pascal for the language; Dijkstra uses an Algol-68-like formalism called the Guarded Command Language (GCL). Formal verification has matured since Dijkstra and Hoare, but these older texts may still be a good starting point.
The PVS tool, developed at SRI International, is a specification and verification system. I worked with it and found it very useful for theorem proving.
Regarding (1), you will probably have to create a model of the algorithm that "captures" its side effects in program variables intended to model such state-based side effects.
I have recently been reading about the general use of prime factors within cryptography. Everywhere I read, it states that there is no published algorithm which operates in polynomial time (as opposed to exponential time) to find the prime factors of a key.
If an algorithm were discovered or published which did operate in polynomial time, how would this impact the real-world computing environment, as opposed to the world of theory and computer science? Considering the extent to which we depend on cryptography, would the world suddenly come to a halt?
With this in mind, if P = NP is true, what might happen, and how much do we depend on the fact that it is as yet unproven?
I'm a beginner, so please forgive any mistakes in my question, but I think you'll get my general gist.
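To see why "polynomial time" is measured in the number of digits, consider naive trial division (a sketch, in Python): its running time grows roughly like the square root of n, which is exponential in the bit length of n, and that gap is exactly what keeps large RSA moduli out of reach.

```python
def trial_division(n):
    # Tries divisors up to sqrt(n): about 2**(bits/2) steps for a
    # worst-case n of the given bit length -- exponential in bit length.
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

assert trial_division(2**16 + 1) == [65537]   # fine for small numbers
# A 1024-bit RSA modulus would need on the order of 2**512 steps this way.
```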
With this in mind, if P = NP is true, would they ever tell us?
Who are “they”? If it were true, we would know. The computer scientists? That’s us. The cryptographers and mathematicians? The professionals? The experts? People like us. Users of the Internet, even of Stack Overflow.
We wouldn't need to be told. We'd do the telling.
Science and research aren't done behind closed doors. If someone finds out that P = NP, this couldn't be kept secret, simply because of the way research is published. In principle, everyone has access to such research.
It depends on who discovers it.
NSA and other organizations that research cryptography under state sponsorship, contrary to Konrad's assertion, do research and science behind closed doors—and guns. And they have "scooped" published academic researchers on some important discoveries. Finally, they have a history of withholding cryptanalytic advances for years after they are independently discovered by academic researchers.
I'm not big into conspiracy theories. But I'd be very surprised if a lot of "black" money hasn't been spent by governments on the factorization problem. And if any results are obtained, they would be kept secret. A lot of criticism has been leveled at agencies in the U.S. for failing to coordinate with each other to avert terrorism. It might be that notifying the FBI of information gathered by the NSA would reveal "too much" about the NSA's capabilities.
You might find the first question posed to Bruce Schneier in this interview interesting. The upshot is that NSA will always have an edge over academia, but that margin is shrinking.
For what it is worth, the NSA recommends the use of elliptic curve Diffie-Hellman key agreement, not RSA encryption. Do they like the smaller keys? Are they looking ahead to quantum computing? Or … ?
Keep in mind that factoring is not known to be (and is conjectured not to be) NP-complete; thus, demonstrating a polynomial-time algorithm for factoring would not imply P = NP. Presumably we could switch the foundation of our encryption algorithms to some NP-complete problem instead.
Here's an article about P = NP from the ACM: http://cacm.acm.org/magazines/2009/9/38904-the-status-of-the-p-versus-np-problem/fulltext
From the link:
Many focus on the negative, that if P = NP then public-key cryptography becomes impossible. True, but what we will gain from P = NP will make the whole Internet look like a footnote in history.
Since all the NP-complete optimization problems become easy, everything will be much more efficient. Transportation of all forms will be scheduled optimally to move people and goods around quicker and cheaper. Manufacturers can improve their production to increase speed and create less waste. And I'm just scratching the surface.
Given this quote, I'm sure they would tell the world.
I think there were researchers in Canada(?) who were having good luck factoring large numbers with GPUs (or clusters of GPUs). It doesn't mean the numbers were factored in polynomial time, but the chip architecture was more favorable to factorization.
If a truly efficient algorithm for factoring composite numbers were discovered, I think the biggest immediate impact would be on e-commerce. Specifically, it would grind to a halt until a form of encryption was deployed that doesn't rely on factoring being a one-way function.
There has been a lot of research into cryptography in the private sector for the past four decades. This was a big switch from the previous era, where crypto was largely in the purview of the military and secret government agencies. Those secret agencies definitely tried to resist this change, but once knowledge is discovered, it's very hard to keep it under wraps. With that in mind, I don't think a solution to the P = NP problem would remain a secret for long, despite any ramifications it might have in this one area. The potential benefits would be in a much wider range of applications.
Incidentally, there has been some research into quantum cryptography, which relies on the foundations of quantum mechanics and can detect eavesdropping, in contrast to traditional public-key cryptography, which relies on the computational difficulty of certain mathematical functions and can provide no indication of eavesdropping or guarantee of key security.
The first practical network using this technology went online in 2008.
As a side note, if you enter the realm of quantum computing, you can factor in polynomial time using Shor's algorithm; see Rob Pike's notes from his talk on quantum computing, page 25.
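To make that concrete: the quantum part of Shor's algorithm only finds the order r of a modulo N in polynomial time; the rest is classical post-processing, sketched below in Python with brute-force order finding standing in for the quantum step.

```python
from math import gcd

def order(a, n):
    # Multiplicative order of a mod n, by brute force -- this is the one
    # step a quantum computer performs in polynomial time.
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_classical(n, a):
    if gcd(a, n) != 1:
        return gcd(a, n)              # lucky guess: a shares a factor with n
    r = order(a, n)
    if r % 2 == 1:
        return None                   # odd order: retry with another a
    y = pow(a, r // 2, n)
    if y == n - 1:
        return None                   # trivial square root: retry
    return gcd(y - 1, n)              # a nontrivial factor of n

assert shor_classical(15, 7) == 3     # 7 has order 4 mod 15; gcd(48, 15) = 3
```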
I'm unclear about the legal status of using an algorithm from a published academic paper. Is there an implicit patent over that material? What about open-source applications? Would it be OK to implement that algorithm in an open-source app under one of the free software licenses?
Let's say I have access to paper A, which describes algorithm B. How can I determine whether I can use algorithm B in my commercial closed-source app C or open-source app D? Is the answer always "no"? Is there an expiration date?
There's no such thing as an "implicit patent".
Unfortunately, I'd think you need to evaluate the IP restrictions for each paper.
One of the more famous situations where the algorithm described in an academic paper was ultimately encumbered by a patent is the RSA asymmetric encryption algorithm. The paper "A Method for Obtaining Digital Signatures and Public-Key Cryptosystems" was published in an ACM journal in 1978 describing the algorithm, and a patent was awarded in 1983 (US Patent 4,405,829).
I have not read the paper, so I don't know whether the patent application was mentioned. I do know that the algorithm was rather widely implemented, and when the patent was awarded, MIT/RSADSI started enforcing it. The patent became a rather big issue with PGP, for example. MIT/RSADSI ultimately permitted free use of the patent for non-commercial use, I believe. A similar situation occurred with LZW compression.
AFAIK, having something published in a publicly accessible way (e.g., an academic paper) actually eliminates the possibility of a patent in Europe. In the US, there is a one-year grace period from first publication to obtain a patent. Students and professors usually figure out the potential themselves and apply to the university's technology-transfer office, which figures out what to do with it. This, for example, was the case with CAPTCHAs. If the paper is by a commercial company (e.g., IBM), it is much more likely to be patented or to have additional protections, because employees in research are evaluated partly on the number of patent applications they file.
The problem is that there is often no way to know, because the patent lawyers will usually write a general title that has no obvious connection to the original idea. So contacting the author, just in case, may be prudent. The author may also have an existing implementation available, often open source.
This is what happened with GIF, the compression algorithm was published openly but without mentioning that it had been patented.
Even if the authors of the paper claim not to have patented it, there is no guarantee that another algorithm hasn't already been patented, and a court might decide that it covers that work.
Remember as well that inventing it yourself doesn't protect you from patents: you can invent an algorithm that you have never seen in print and later discover it's covered by some patent.
As Michael said, there's no such thing as an implicit patent.
Generally, when an algorithm or any other research is published, the authors want to put it out into the world for other people to use. If you have access to a published paper which describes some algorithm, it's probably going to be okay for you to use it in any program you might create. However, some algorithms may be patented by their inventors before being published, and in that case if you want to use the algorithm you would have to come to some arrangement with the patent holder, regardless of whether your application is open-source or not. Basically to be safe, you need to check the specific restrictions on that algorithm. (The type of people who publish papers about algorithms are often more likely to let you use their algorithms if your application is open source than if it's proprietary.)