Locking data for x days - algorithm

Is there an (easy) way to encrypt data so that it takes a certain number of CPU hours to decrypt it? Maybe a series of encryptions with short key lengths, a variable one-way function, or anything along those lines?
It's probably not of great use, but what would this encryption scheme be called, and are there tools for it?
edit:
To avoid widely varying brute-force break times, shouldn't I use many rounds with an XOR feedback?
I just came up with this algorithm (for a symmetric block cipher with equal value and key length)... maybe it's nonsense.
round 1
create a zero-block
create a random-block-1
encipher value:zero-block with key:random-block-1 => gives lock-output-1
round 2
create a zero-block
create a random-block-2
encipher value:zero-block with key:random-block-2 => gives temp
XOR temp with random-block-1 => gives lock-output-2
and so on
The XOR with random-block-1 is there so that the unlock routine has to find random-block-1 before it can start brute-forcing lock-output-2.
lock-output-1 + lock-output-2 + ... + lock-output-N would be the complete lock output. When the unlock routine has found N key-blocks that each decipher their lock-output block to zero, it can use the N key-blocks as a whole to decipher the actual data.
Then I'd also need a formula to calculate how many rounds would keep the total break time within, e.g., 10% of the wanted number of CPU hours.
I guess a similar algorithm must already exist out there.
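For illustration, here is a minimal sketch of the chained lock-output construction described above, assuming AES-128 as the block cipher (16-byte key, 16-byte block, so value and key length are equal). The full-length random keys shown here only demonstrate the structure; a real lock would use keys short enough to brute-force in the target time:

```python
# A minimal sketch of the chained "lock-output" construction described above.
# Assumes AES-128 via the widely used Python "cryptography" package.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

BLOCK = 16  # bytes; AES-128 key length == block length

def encipher(key: bytes, value: bytes) -> bytes:
    enc = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
    return enc.update(value) + enc.finalize()

def make_lock(n_rounds: int):
    zero = bytes(BLOCK)
    keys, outputs = [], []
    prev_key = bytes(BLOCK)           # round 1 XORs with zeros (a no-op)
    for _ in range(n_rounds):
        k = os.urandom(BLOCK)         # the random-block for this round
        temp = encipher(k, zero)      # encipher the zero-block under it
        outputs.append(bytes(a ^ b for a, b in zip(temp, prev_key)))
        keys.append(k)
        prev_key = k                  # next round's output is masked by this key
    # In the full scheme the keys would be discarded and, e.g., their
    # concatenation (or a hash of it) used as the key for the actual data.
    return keys, outputs
```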

The concept is called timed commitment, as defined by Boneh and Naor. The data you want to encrypt is said to be committed by one party (which I call the sender), such that another party (the receiver) may, at some tunable cost, recover the data.
The method described by Boneh and Naor is considerably more advanced than what you suggest. Their timed commitment scheme has the following three properties:
Verifiable recovery: the sender is able to convince the receiver that he really did commit a proper value which the receiver will be able to recover by applying a substantial but feasible amount of CPU muscle to it.
Recovery with proof: once the recovery has been done, it can be verified efficiently: a third party wishing to check that the recovered value is indeed the one which was committed can do so efficiently (without applying hours of CPU to it).
Immunity against parallel attacks: the recovery process cannot benefit from access to a thousand PCs: one cannot go much faster than what can be done with a single CPU.
With these properties, a timed commitment becomes a worthwhile tool in some situations; Boneh and Naor mainly discuss contract signing, but also honesty preserving auctions and a few other applications.
I am not aware of any actual implementation or even a defined protocol for timed commitments, beyond the mathematical description by Boneh and Naor.
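The "immunity against parallel attacks" property typically rests on an inherently sequential computation. Below is a minimal Python sketch of the classic sequential-squaring time-lock puzzle of Rivest, Shamir and Wagner, a related building block rather than Boneh and Naor's actual protocol; the tiny primes and the squaring count are toy assumptions for illustration:

```python
# Sketch of a sequential-squaring time-lock puzzle (Rivest/Shamir/Wagner),
# NOT Boneh and Naor's timed commitment. Toy parameters throughout;
# real puzzles use a large RSA modulus with secret factorization.
import secrets

p, q = 1000003, 1000033         # toy primes; use ~1024-bit primes in practice
n, phi = p * q, (p - 1) * (q - 1)
t = 1_000_000                   # number of sequential squarings required

secret = secrets.randbelow(n)   # the value to time-lock
a = 2

# The puzzle creator knows phi(n) and can shortcut the exponentiation:
b_fast = pow(a, pow(2, t, phi), n)
puzzle = secret ^ b_fast        # published together with (n, a, t)

# The solver does not know phi(n) and must square sequentially; parallel
# hardware does not help, because each step depends on the previous one:
b_slow = a % n
for _ in range(t):
    b_slow = b_slow * b_slow % n

assert b_slow == b_fast
print(puzzle ^ b_slow == secret)  # True: secret recovered after t squarings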

You can encrypt it normally, and release just enough information about the key such that a brute force attack will take X CPU hours.
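A minimal sketch of that idea, assuming AES-GCM from the Python cryptography package; the number of hidden bits is a placeholder you would tune to the target CPU time, and this is an illustration rather than a vetted scheme:

```python
# Sketch of partial key disclosure: encrypt normally, then publish the key
# with its low HIDDEN_BITS bits zeroed, so recovery costs 2**HIDDEN_BITS trials.
import os
from cryptography.exceptions import InvalidTag
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

HIDDEN_BITS = 24  # assumed placeholder: tune so 2**HIDDEN_BITS trials take X hours

key = os.urandom(16)
nonce = os.urandom(12)
ciphertext = AESGCM(key).encrypt(nonce, b"the locked data", None)

# Publish the key with its low HIDDEN_BITS bits masked off:
masked = (int.from_bytes(key, "big") >> HIDDEN_BITS) << HIDDEN_BITS
public_hint = masked.to_bytes(16, "big")

def unlock(hint: bytes, nonce: bytes, ct: bytes) -> bytes:
    """Brute-force the hidden bits; deliberately expensive."""
    base = int.from_bytes(hint, "big")
    for guess in range(1 << HIDDEN_BITS):
        candidate = (base | guess).to_bytes(16, "big")
        try:
            return AESGCM(candidate).decrypt(nonce, ct, None)  # GCM tag rejects wrong keys
        except InvalidTag:
            continue
    raise ValueError("no key found")
```

Note that this inherits the parallelism problem raised in the next answer: the brute-force loop splits trivially across many machines.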

No, you can't do that reliably, because:
the attacker could rent a powerful computer (a computing cloud, for example) and mount a highly parallel, much faster attack;
so far, computers keep getting faster as time passes: what takes a day today might take one minute in two years.

Well, to know the number of CPU hours any kind of decryption will take, it does not really matter how the encryption takes place. Instead you would have to know:
which decryption algorithm the decrypter will use (perhaps one not yet invented?)
which implementation of that algorithm he will use
which CPU/hardware he will use
Each of these three parameters can make a difference in speed of a factor of 1000 or more.

An encryption algorithm is considered broken when someone has found a way to recover the password faster (on average) than a brute-force attack.
That is the case for some algorithms, such as MD5, so make sure you pick an algorithm that isn't broken (yet).
Other algorithms, even if they are not broken, are still vulnerable to brute-force attacks... it might take a while, but everything that is encrypted can be decrypted; it's only a question of time and resources.
If a guy has a huge zombie computer farm working for him around the world, it might take a few hours to crack something that would take years for a guy with a single laptop.
If you want maximum security, you can combine a couple of existing encryption algorithms with a custom algorithm of your own. Someone can still try to crack your data, but most likely, unless you are dealing with top-secret national data, it will never happen.

It is relative: how fast a computer can decrypt depends on its computing power, and the choice of encryption algorithm depends on the data you want to protect. With a good encryption algorithm, even an average computer takes its time to decrypt, because there is always a price for good things. I recommend you look at elliptic curve cryptography: it encrypts strongly and holds up very well against decryption attempts.
That is all I can say about it.

Related

What is the output of a fingerprint scanner? Is there any deterministic identifying information?

I am planning on generating a set of public/private keys from a deterministic identifying piece of information from a person and was planning on using fingerprints.
My question, therefore, is: what is the output of a fingerprint scanner? Is there any deterministic output I could use, or is it always going to be a matter of "confidence level"? i.e. Do I always get a "number" which, if matched exactly to the database, will allow access, or do I rather get a number which, if "close enough" to the stored value on the database, allows access, based on a high degree of confidence, rather than an exact match?
I am quite sure the second option is the answer but just wanted to double-check. Is there any way to get some sort of deterministic output? My hope was to re-generate keys every time rather than actually storing fingerprint data. That way a wrong fingerprint would simply generate a new and useless key.
Any suggestions?
Thanks in advance.
I would advise against it for several reasons.
Fingerprints are not entirely deterministic. As suggested in @ImSimplyAnna's answer, you might 'round' the results in order to have a better chance of obtaining a deterministic result. But that would significantly reduce the number of possible/plausible fingerprints, and thus not meet the search-space size requirement for a cryptographic algorithm. On top of that, I suspect the entropy of such a result to be somewhat low, compared to the requirements of modern algorithms, which are always based on high-quality random numbers.
Fingerprints are not secret: we expose them to everyone all the time, and they can be revealed to an attacker at any moment, captured in a picture with a simple camera. A key must be a secret, and the only place we know we can store secrets without exposing them is our brain (which is why we use passwords).
An important feature for cryptographic keys is the possibility to generate new one if there is a reason to believe the current ones might be compromised. This is not possible with fingerprints.
That is why I would advise against it. More generally, I discourage anyone (myself included) from writing his/her own cryptographic algorithm, because it is so easy to screw up. It might be the easiest thing to screw up out of everything you could write, because attackers are so vicious!
The only good approach, if you're not a skilled specialist, is to use libraries that are used all around, because they've been written by experts on the matter, and they've been subject to many attacks and attempts to break them, so the ones still standing offer much better protection than anything a non-specialist could write (or basically anything a single human could write).
You can also have a look at this question on the Crypto Stack Exchange. They also discourage the OP from using anything other than a battle-hardened algorithm or protocol.
Edit:
"I am planning on generating a set of public/private keys from a deterministic identifying piece of information"
Actually, it did not strike me at first (it should have), but keys MUST NOT be generated from anything that is not random. NEVER.
You have to generate them randomly. If you don't, you are already giving the attacker more information than you should. Being a programmer does not make you a cryptographer. Your users' information is at stake; do not take any chances (and if you're not a cryptographer, you don't stand one).
A fingerprint scanner looks for features where the lines on the fingerprint either split or end. It then calculates the distances and angles between such features in an attempt to find a match.
Here's some more reading on the subject:
https://www.explainthatstuff.com/fingerprintscanners.html
in the section "How fingerprints are stored and compared".
The source is the best explanation I can find, but looking around some more, it seems that all fingerprint scanners use some variation of that algorithm to generate data that can be matched.
Storing raw fingerprints would not only take up way more space on a database but also be a pretty significant security risk if that information was ever leaked, so it's not really done unless absolutely necessary.
Judging by that algorithm, I would assume that there is always some "confidence level". The angles and distances will never be 100% equal between scans, so there has to be some leeway to make sure a match is still found even if the finger is pressed against the scanner a bit harder or the finger is at a slightly different angle.
Based on this, I'd assume that generating a key pair based on a fingerprint would be possible, if you can figure out a way to make similar scans result in the same information. Simply rounding the angles and distances may work, but may introduce cases where two different people generate the same key pairs, or cases where different scans of the same fingerprint have a high chance of generating several different keys.
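To make the rounding idea concrete, here is a minimal Python sketch; the angle/distance pairs, bucket sizes, and key-derivation step are all hypothetical assumptions, and real biometric key derivation uses far more robust fuzzy extractors:

```python
# Sketch of the "rounding" idea: quantize minutiae features onto a coarse
# grid, then hash the canonical result to derive key material.
import hashlib

def derive_key_material(minutiae, angle_step=15.0, dist_step=5.0) -> bytes:
    # minutiae: iterable of (angle_degrees, distance) floats from a scanner
    quantized = sorted(
        (round(a / angle_step), round(d / dist_step)) for a, d in minutiae
    )
    canonical = ",".join(f"{a}:{d}" for a, d in quantized).encode()
    return hashlib.sha256(canonical).digest()

# Two slightly different scans of the same finger land in the same buckets:
scan1 = [(44.8, 101.2), (90.3, 52.9)]
scan2 = [(45.6, 99.7), (89.1, 53.4)]
print(derive_key_material(scan1) == derive_key_material(scan2))  # True here

# Caveat: a measurement near a bucket boundary can still flip to a different
# bucket, yielding a completely different key, which is exactly the
# "several different keys per finger" failure mode described above.
```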

Suggest proof of work algorithm that can be used to control the growth of the blockchain

I'm working on a blockchain based identity system. And, since each item will be in the chain forever, consuming space, I'm thinking on adding a proof of work requirement to add items to the chain.
At first I was thinking of bitcoin, since it's a tried and tested way to prove that work was done, but going that way would prevent users from joining in, since bitcoin is not widely adopted yet. Also, in a distributed system, it is not clear who should get the money.
So I'm looking for a proof-of-work algorithm whose complexity can be easily adjusted based on the blockchain's growth speed, and whose results are hard to reuse. Also, if the complexity has grown since the work was started, it should be possible to finish the work at the adjusted complexity without redoing it from scratch.
Can someone suggest to me something that would work for my purpose, as well as would be resistant to GPU acceleration?
Simple... burn bitcoins. Anyone can do it - so there's no barrier to entry, and really what you need is "proof of destroyed value". Because the value is destroyed, you know the miner's incentives are to strengthen your chain.
Invent a bitcoin address that cannot be real but checksums correctly. Then have your miners send to that burn address, with a public key in OP_RETURN. Doing so earns them the right to mine during some narrow window of time.
"Difficulty" is adjusted by increasing the amount of bitcoins burned. Multiple burns in the same window can get reward shared, but only one block is elected correct (that with a checksum closest to the checksum of all of the valid burns for the window).

Password hashing algorithm that will keep password safe even from supercomputers?

I was researching how MD5 is known to have collisions, so it's not secure enough. I am looking for a hashing algorithm that even supercomputers will take time to break. Can you tell me what hashing algorithm will keep my passwords safe for, say, the next 20 years of supercomputing advancement?
Use a key derivation function with a variable number of rounds, such as bcrypt.
The passwords you encrypt today, with a hashing difficulty that your own system can handle without slowing down, will always be vulnerable to the faster systems of 20 years in the future. But by increasing the number of rounds gradually over time you can increase the amount of work it takes to check a password in proportion with the increasing power of supercomputers. And you can apply more rounds to existing stored passwords without having to go back to the original password.
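As a sketch of how the tunable cost works in practice, assuming the Python bcrypt package:

```python
# Minimal sketch of tunable-rounds password hashing with the "bcrypt" package.
import bcrypt

password = b"correct horse battery staple"

# cost=12 means 2**12 internal iterations; raise it as hardware gets faster
hashed = bcrypt.hashpw(password, bcrypt.gensalt(rounds=12))
print(hashed)  # e.g. b"$2b$12$..." -- the cost factor is stored in the hash

# Verification reads the cost and salt back out of the stored hash itself:
assert bcrypt.checkpw(password, hashed)
```

Because the cost factor travels inside the hash, old hashes keep verifying while new hashes get the higher setting.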
Will it hold up for another 20 years? Difficult to say: who knows what crazy quantum crypto and password-replacement schemes we might have by then? But it certainly worked for the last 10.
Note also that entities owning supercomputers and targeting particular accounts are easily going to have enough power to throw at it that you can never protect all of your passwords. The aim of password hashing is to mitigate the damage from a database leak, by limiting the speed at which normal attackers can recover passwords, so that as few accounts as possible have already been compromised by the time you've spotted the leak and issued a notice telling everyone to change their passwords. But there is no 100% solution.
As someone else said, what you're asking is practically impossible to answer. Who knows what breakthroughs will be made in processing power over the next twenty years? Or mathematics?
In addition you aren't telling us many other important factors, including against which threat models you aim to protect. For example, are you trying to defend against an attacker getting a hold of a hashed password database and doing offline brute-forcing? An attacker with custom ASICs trying to crack one specific password? Etc.
With that said, there are things you can do to be as secure and future-proof as possible.
First of all, don't just use vanilla cryptographic hash algorithms; they aren't designed with your application in mind. Indeed they are designed for other applications with different requirements. For one thing, they are fast because speed is an important criterion for a hash function. And that works against you in this case.
Additionally some of the algorithms you mention, like MD5 or SHA1 have weaknesses (some theoretical, some practical) and should not be used.
Prefer something like bcrypt, an algorithm designed to resist brute force attacks by being much slower than a general purpose cryptographic hash whose security can be “tuned” as necessary.
Alternatively, use something like PBKDF2, which is designed to run a password through a function of your choice a configurable number of times, along with a salt; this also makes brute-forcing much more difficult.
Adjust the iteration count depending on your usage model, keeping in mind that the slower it is, the more security against brute-force you have.
In selecting a cryptographic hash function for PBKDF, prefer SHA-3 or, if you can't use that, prefer one of the long variants of SHA-2: SHA-384 or SHA-512. I'd steer clear of SHA-256 although I don't think there's an issue with it in this scenario.
In any case, use the largest and best salt you can; I'd suggest using a good cryptographically secure PRNG and never using a salt shorter than 64 bits (note that I am talking about the length of the salt generated, not the value returned).
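Here is a minimal sketch of those recommendations using only Python's standard library; the iteration count is an assumed placeholder you would tune to your own hardware:

```python
# Sketch of PBKDF2-HMAC-SHA512 with a random 16-byte salt, standard library only.
import hashlib
import hmac
import os

def hash_password(password: str, iterations: int = 600_000):
    salt = os.urandom(16)  # 128 bits, well above the 64-bit minimum above
    digest = hashlib.pbkdf2_hmac("sha512", password.encode(), salt, iterations)
    return salt, iterations, digest      # store all three alongside the user

def verify(password: str, salt: bytes, iterations: int, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha512", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison
```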
Will these recommendations help 20 years down the road? Who knows - I'd err on the side of caution and say "no". But if you need security for that long a timeframe, you should consider using something other than passwords.
Anyways, I hope this helps.
Here are two pedantic answers to this question:
If P = NP, there is provably no such hash function (and vice versa, incidentally). Since it has not been proven that P != NP at the time of this writing, we cannot make any strong guarantees of that nature.
That being said, I think it's safe to say that supercomputers developed within the next 20 years will take "time" to break your hash, regardless of what it is. Even if it is in plaintext some time is required for I/O.
Thus, the answer to your question is both yes and no :)

Algorithm that costs time to run but easy to verify?

I am designing a website for an experiment. There will be a button which the user must click and hold for a while, then release; the client then submits an AJAX event to the server.
However, to prevent auto-click bots and fast spam, I want the hold time to be real and not skippable, e.g. enforced by some calculation. The point is to consume actual CPU time, so that you can't simply guess the AJAX callback value or speed up the system clock to bypass it.
Is there an algorithm that
is fast & easy for generating a challenge on the server,
costs some time to execute on the client side, with no way to spoof or shortcut that time, and
is easy & fast for verifying the response on the server?
You're looking for a Proof-of-work system.
The most popular algorithm seems to be Hashcash (also on Wikipedia), which is used for bitcoins, among other things. The basic idea is to ask the client program to find a hash with a certain number of leading zeroes, which is a problem they have to solve with brute force.
Basically, it works like this: the client has some sort of token. For email, this is usually the recipient's email address and today's date. So it could look like this:
bob@example.com:04102011
The client now has to find a random string to put in front of this:
asdlkfjasdlbob@example.com:04102011
such that the hash of this has a bunch of leading 0s. (My example won't work because I just made up a number.)
Then, on your side, you just have to take this random input and run a single hash on it, to check if it starts with a bunch of 0s. This is a very fast operation.
The reason the client has to spend a fair amount of CPU time finding the right hash is that it is a brute-force problem. The only known way to do it is to choose a random string, test it, and if it doesn't work, choose another one.
Of course, since you're not doing emails, you will probably want to use a different token of some sort rather than an email address and date. However, in your case, this is easy: you can just make up a random string server-side and pass it to the client.
One advantage of this particular algorithm is that it's very easy to adjust the difficulty: just change how many leading zeroes you want. The more zeroes you require, the longer it will take the client; however, verification still takes the same amount of time on your end.
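To make that concrete, here is a minimal Python sketch of the whole flow; SHA-256 and the difficulty value are illustrative assumptions:

```python
# Sketch of a hashcash-style proof of work: the server issues a random token,
# the client brute-forces a nonce until the hash has enough leading zero bits,
# and the server verifies with a single hash.
import hashlib
import itertools
import os

DIFFICULTY = 20  # leading zero bits required; raise to make clients work longer

def make_challenge() -> str:
    return os.urandom(16).hex()          # cheap for the server to generate

def leading_zero_bits(digest: bytes) -> int:
    return 256 - int.from_bytes(digest, "big").bit_length()

def solve(token: str) -> str:
    for counter in itertools.count():    # brute force: the only known approach
        nonce = str(counter)
        d = hashlib.sha256((token + nonce).encode()).digest()
        if leading_zero_bits(d) >= DIFFICULTY:
            return nonce

def verify(token: str, nonce: str) -> bool:
    d = hashlib.sha256((token + nonce).encode()).digest()
    return leading_zero_bits(d) >= DIFFICULTY  # one hash: fast to check

token = make_challenge()
nonce = solve(token)                     # slow on the client
assert verify(token, nonce)              # fast on the server
```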
Came back to answer my own question. This is called a Verifiable Delay Function (VDF).
The concept was first proposed in 2018 by Boneh et al., who gave several candidate constructions; it is an important tool for adding a time delay in decentralized applications. To be exact, a verifiable delay function is a function f: X → Y that takes a prescribed wall-clock time to compute, even on a parallel processor, and outputs a unique result that can be verified efficiently. In short, even when evaluated on a large number of parallel processors, f still requires a specified number of sequential steps to evaluate.
https://www.mdpi.com/1424-8220/22/19/7524
The idea of a VDF is a step beyond @TikhonJelvis's PoW answer because apparently it "takes a prescribed wall-clock time to compute, even on a parallel processor".

Is user delay between random takes a good improvement for a PRNG?

I thought that for making random choices, for example the next track in a player or the next page in the browser, it could be possible to use time as a 'natural phenomenon': a decent PRNG could continuously generate the next random number without a program request (for example, in a thread every few milliseconds or even more often), and when the moment comes (based on the user's decision), the choice would naturally be affected by the user's delay.
Is this approach good enough, and how can it be tested? The problem with testing it manually is that I cannot wait long enough in the real world to save up enough random numbers to feed to a test program, and any artificial attempt to speed this up would make the method itself invalid.
Thanks
A good random number generator really doesn't need improvement, and even if it did, it isn't clear that user input timing would help.
Could a user ever detect a pattern in tracks selected by an LCG? Whatever your platform, it's likely that its built-in random() function would be good enough (that is, it would appear completely random to a user).
If you are still worried, however, use a cryptographic-quality RNG, seeded with data from the dedicated source of randomness on your system. Nowadays, many of these system RNGs use truly random bits generated through quantum events in hardware. However, they can be slow to produce bits, so it's best to use them as a seed for a fast, algorithmic PRNG.
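For example, a minimal Python sketch of that seed-then-run pattern (for anything security-sensitive you would skip the PRNG and use the secrets module directly):

```python
# Sketch: seed a fast algorithmic PRNG once from the OS randomness source,
# then use it for bulk non-security choices such as shuffling tracks.
import os
import random

seed = os.urandom(32)        # slow-ish, high-quality system entropy
rng = random.Random(seed)    # fast Mersenne Twister seeded from it

playlist = ["track-%d" % i for i in range(10)]
rng.shuffle(playlist)
print(playlist[0])           # next "random" track
```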
Now, if you aren't convinced these approaches are good enough, you should be very skeptical that the timing of user typing is a good source. The keys that are pressed by users are highly predictable, given the limited vocabulary in use and the patterns that tend to appear within that limited set of words. This predictability in letter sequences leads to a high degree of predictability in timing between key presses.
I know that a lot of security programs use this technique during key generation. I don't think that it is pure snake oil, but it could be a placebo to placate users. A good product will depend on the system RNG.
Acquiring the time information that you describe can indeed add entropy to a PRNG. However, from your description of your intended applications, I don't think you need it. For "random choices for example for next track in a player or next page in the browser", a trivial, unmodified PRNG is fine. For security applications such as nonces, etc. it is much more important.
Anyway, you should read about PRNG entropy sources.
I wouldn't improve PRNGs with user delays, mostly because they're quite regular: you type at around the same speed, and it takes too long to measure the delay between one click and the next (assuming normal usage). I'd rather use other user-triggered events: which keys were pressed, the distance between successive clicks, the position of the mouse at given moments.
