Best algorithm for transforming public keys into user directory paths?

I am making a communication system where users are identified by their public keys (asymmetric cryptography). Essentially, each user's ID is their public key.
But I need to create some directories for them where I store their messages, with a layout like:
/storage/USERNAME/
msg1
msg2
...
But how do I obtain USERNAME? I don't want to prompt the user for it, since it is irrelevant to them; it just needs to be a unique, valid directory name.
I am considering using a hashing algorithm like SHA-3 to simply hash their public keys, and then using the hash as the user identifier. But I'm not sure. Is that overkill? Any better ideas?
Update: I used the solution in the accepted answer here, as well as the recommendation in its comments (i.e. to use sha3_224).

You can use base62 encoding. It's much faster than SHA-3, and since it's a reversible encoding rather than a hash, it's collision-free in this case.
As for having many files in one folder, it's also better to split the storage into subfolder parts. For example, use the first two characters of the ID to generate subdirectories, so the user with ID abcdef1234 is saved at the path /storage/ab/cdef1234/
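A minimal sketch of that scheme in Python, combining the sha3_224 hash from the question's update with the two-character shard prefix (the /storage root and the helper name are just placeholders):

import hashlib
import os

def user_dir(public_key: bytes, root: str = "/storage") -> str:
    # Hash the key so the directory name is fixed-length and
    # filesystem-safe regardless of how the key is encoded.
    digest = hashlib.sha3_224(public_key).hexdigest()
    # The first two characters become a shard directory: /storage/ab/cdef...
    return os.path.join(root, digest[:2], digest[2:])

key = b"...the user's public key bytes..."
path = user_dir(key)
os.makedirs(path, exist_ok=True)  # messages then go in path/msg1, path/msg2, ...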

Related

Hash in Laravel

Hi there. I'm preparing for a Laravel test, and there's a question that I think is not correct.
When should you use a hash?
The available answers are:
When you want to compress the contents of a file.
When you want to securely store credit card information so you can use it later.
When you want to securely send a password over email.
When you want to identify the contents of a file without storing the entire file.
Since hashing is for protecting passwords (not for sending them over email), none of these answers seems correct to me. What do you think?
Option 4. Identifying contents of a file.
A hash is a function that returns a constant-length output for every input. Another property of hash functions is that for any input a they always return the same value b: if you provide file a and store its hash b, then whenever you supply file a again you will get hash b back. The last property is that for different inputs c and d and hash function f, f(c) should be different from f(d) (or the chance of the outputs being equal should be near 0).
In a real-world scenario, you often encounter hashes when downloading software and wanting to verify that the file you've downloaded is correct. The developer puts a hash of the executable on their site. You download the file, calculate its checksum (hash), and compare it with the one from the website. If they match, you know it's the same file (as long as the hash algorithm is not known to have collisions...).
It is quite a good approach to comparing file contents, because hashes take much less space than the actual files.
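As a sketch of that verification workflow in Python (the file name and expected digest below are made-up placeholders; chunked reading keeps memory use constant for large files):

import hashlib

def file_sha256(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large files don't have to fit in memory.
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

# Placeholder standing in for the checksum published by the developer.
expected = "...published-checksum..."
if file_sha256("download.bin") == expected:
    print("checksum matches - same contents")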

Is there a way of knowing what hash algorithm is used?

My question stems from the fact that I've got a database from a customer with some users and passwords. I have no idea what the passwords are (so they're stored correctly in the database), and the customer would rather not give these passwords away (which is understandable).
I have access to the database, and I know that the password hash is 60 characters long, but nothing else.
I basically want to create a new user (directly in the database, if possible) with a temporary password so I can log in to the system - but that's kind of impossible if I don't know how the password hash is created. Any thoughts?
The system is built on CodeIgniter, but I don't know what authentication method is used.
What data do the passwords contain? Do they contain only 0-9 and a-f, i.e. hex values, or can they contain other data too? If you want to know the algorithm, it is crucial to answer this question.
If they contain hex values only, 60 × 4 = 240, and there is no common algorithm which gives a hash that is 240 bits long.
It has been suggested that the password contains a salt, which might explain the unusual length.
Why not ask the customer what hash algorithm is used? It is understandable that the customer doesn't want to give away the passwords, but there should be no objection to giving away the hash algorithm.
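As a rough illustration of that kind of format detective work in Python (the heuristics below are my own additions, not from the answer; notably, a 60-character value starting with $2a$/$2b$/$2y$ is the standard bcrypt format, which is common in PHP applications):

import re

def guess_hash_format(h: str) -> str:
    # 60 characters beginning with $2a$/$2b$/$2y$ is the standard
    # bcrypt (modular crypt) format, common in PHP applications.
    if re.fullmatch(r"\$2[aby]\$\d{2}\$[./A-Za-z0-9]{53}", h):
        return "bcrypt"
    if re.fullmatch(r"[0-9a-f]+", h):
        return {32: "hex; maybe MD5", 40: "hex; maybe SHA-1",
                64: "hex; maybe SHA-256"}.get(len(h), "hex; unusual width")
    return "not plain hex; look for a scheme prefix or base64 padding"

print(guess_hash_format("$2y$10$" + "a" * 53))  # bcrypt-shaped example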

Hash for unordered set?

I am trying to solve a one-way identity problem: a group of authors wants to publish something without revealing their real usernames. Are there algorithms/libraries for hashing an unordered set of usernames?
Some people would suggest sorting the set alphabetically first, then joining, and finally hashing, but that's not an ideal solution for a dynamically growing set.
Additional questions (not compulsory for the main question):
If such an algorithm exists, can we verify from the hash whether a username is one of the authors?
If we already know the hash of a group of usernames and a new author is added, can we get a new hash without knowing the previous authors' usernames?
Are you willing to accept a small probability of false positives, that is, of names that aren't authors being incorrectly identified as authors when someone checks? (The probability can be made arbitrarily small.)
If you are, then a Bloom filter would fit the bill perfectly.
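A toy Bloom filter in Python to illustrate (the bit-array size and hash count here are arbitrary; a real one would be sized from the expected number of authors and the target false-positive rate):

import hashlib

class BloomFilter:
    def __init__(self, size_bits=8192, num_hashes=5):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, item):
        # Derive several independent positions by salting the hash
        # with the index of each simulated "hash function".
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, item):
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(item))

authors = BloomFilter()
authors.add("alice")
print("alice" in authors)    # True - members are always found
print("mallory" in authors)  # False with high probability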
You can always generate a hash, regardless of whether or not you know the other authors' user names. You can't guarantee that it's a unique hash, though.
If you know all the user names in advance, you can generate a minimal perfect hash, but any time you add a user name you'll have to generate a completely new hash table--with different hashes. That's obviously not a good solution.
It depends on what you want your final keys to look like.
One possibility is to assign unique sequential IDs to the user names and then obfuscate those ids so that they don't look like sequential IDs. This is similar to what YouTube does with their IDs--they turn a 64-bit number into an 11-character base64 string. I wrote a little article about that, with code in C#. Check out http://www.informit.com/guides/content.aspx?g=dotnet&seqNum=839.
And, yes, the process is reversible.
It sounds like a single hash won't do you any good. 1. You can't verify that a single username is in the hash; you would need to know all the usernames. 2. You can't add a new user to the hash without knowing something about the unhashed usernames (with any good hash algorithm, the order in which you add users will matter).
For #2, a partial solution is to keep, instead of all the usernames, something like an XOR of the existing users. When you want to add a new user, XOR it into the existing value and re-hash the result. Then it won't matter in which order you added the users.
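A sketch of a variant of that idea in Python, XOR-ing per-name digests into an order-independent accumulator (note the known weakness that adding the same name twice cancels it out):

import hashlib

def name_digest(username: str) -> int:
    return int.from_bytes(hashlib.sha256(username.encode()).digest(), "big")

def set_hash(usernames) -> int:
    acc = 0
    for name in usernames:
        acc ^= name_digest(name)  # XOR is commutative, so order is irrelevant
    return acc

h = set_hash(["alice", "bob"])
assert h == set_hash(["bob", "alice"])       # unordered
h = h ^ name_digest("carol")                 # add an author incrementally
assert h == set_hash(["alice", "bob", "carol"])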
But the real solution, I think, is just to have a set of hashes rather than a hash of a set. Is there a reason you can't do this? Then you can easily keep the set ordered or unordered as you wish, easily add users to the set, and easily check whether a given author is already in it.

SHA-1 hash for storing Files

After reading this, it sounds like a great idea to store files using the SHA-1 hash for the directory.
I have no idea what this means, however; all I know is that SHA-1 and MD5 are hashing algorithms. If I calculate the SHA-1 hash using this Ruby script and then change the file's content (which changes the hash), how do I know where the file is stored?
My question is then, what are the basics of implementing a SHA-1/file-storage system?
If all of the files are changing content all the time, is there a better solution for storing them, or do you just have to keep updating the hash?
I'm just thinking about how to create a generic file storing system like GoogleDocs, Flickr, Youtube, DropBox, etc., something that you could reuse in different environments (such as storing PubMed journal articles or Cramster homework assignments and tests, or just images like on Flickr). I'd probably store them on Amazon EC2. Just some system so I can say "this is how I'll 99% of the time do file storing from now on", so I can stop thinking about building a solid/consistent way to store files and get onto some real problems.
First of all, if the contents of the files are changing, deriving the filename from a SHA digest is not a very suitable approach, because the name and location of the file in the filesystem must change whenever the contents of the file change.
Basically you first compute a SHA-1 or MD5 digest (= hash value) from the contents of the file.
When you have a digest, for example 00e4f56c0de1c61fdb926e79e8a0a65bd12930c9, you generate a file location and filename from it: you take the first few characters of the digest for the directory structure and the rest for the file name. For example:
00e4f56c0de1c61fdb926e79e8a0a65bd12930c9 => some/path/00/e4/f5/6c0de1c61fdb926e79e8a0a65bd12930c9.txt
This way you only need to store the SHA-1 digest of the file in the database. You can then always find out the right location and name of the file.
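A minimal Python sketch of that split (the some/path root and .txt suffix just mirror the example above):

import hashlib
import os

def content_path(data: bytes, root: str = "some/path") -> str:
    digest = hashlib.sha1(data).hexdigest()
    # 00e4f56c0de1... => some/path/00/e4/f5/6c0de1....txt
    return os.path.join(root, digest[0:2], digest[2:4], digest[4:6],
                        digest[6:] + ".txt")

print(content_path(b"file contents here"))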
Directories usually also have a maximum number of entries they can contain - for example, a maximum of 32,000 subdirectories and files per directory. A directory structure based on this kind of hashing makes it unlikely that you store too many files in the same directory. Hashing like this also ensures that every directory holds about the same number of files, so you won't end up with all your files in one directory.
The idea is not to change the file content, but rather its name (and path), by using a hash value.
Changing the content with a hash would be disastrous since a hash is normally not reversible.
I'm not sure of the motivation for using a hash rather than the file name (or even rather than a long random number), but here are a few advantages of the hash approach:
the file names on disk are uniform
the upper or lower parts of the hash value can be used to name the directories, and hence distribute the files relatively uniformly
the name becomes a code, making it difficult for someone to
a) guess a file name
b) categorize pictures (should someone steal the hard drive contents)
you can retrieve the filename and location from the file contents itself (assuming the hash comes from that content; not quite sure which use case would involve this... a bit contrived...)
The general appeal of using a hash is that, unlike a file name, a hash is meaningless, and therefore one would require the database to relate images to "bibliographic" data (name of uploader, date of upload, tags, ...).
Thinking about it and re-reading the referenced SO response, I don't really see much of an advantage to a hash, as compared to, say, a random number...
Furthermore... some hashes produce a numeric value, typically expressed in hexadecimal (as seen in the referenced SO question), and this could be seen as wasteful, making the file names longer than they need to be and hence putting more stress on the file system (bigger directories...).
One advantage I see in storing files using their hash is that the file data only needs to be stored once and can then be referenced multiple times within your database. This will save you space if different users upload the exact same file.
However, the downside is that when a user deletes what they think is their file from your app, you can't just physically delete the file from disk, because other users who uploaded the same exact file may still be using it.
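One common fix, sketched here with an in-memory dict (a real system would keep the count in the database next to the digest), is to reference-count each stored blob and only unlink it when the count reaches zero:

import os

ref_counts = {}  # digest -> number of uploads referencing the blob

def add_reference(digest: str):
    ref_counts[digest] = ref_counts.get(digest, 0) + 1

def remove_reference(digest: str, path: str):
    ref_counts[digest] -= 1
    if ref_counts[digest] == 0:   # last user deleted their copy
        del ref_counts[digest]
        os.remove(path)           # now the blob itself can go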
The idea is that you need to come up with a name for the photo, and you probably want to scatter the files among a number of directories. One easy way to come up with a unique name is to use the hash.
So the beginning of the hash was peeled off for a multi-level directory structure and the rest of the hash was used for a filename for the jpg.
This has the additional benefit of detecting duplicate uploads.

How to generate a unique hash for a URL?

Given these two images from twitter.
http://a3.twimg.com/profile_images/130500759/lowres_profilepic.jpg
http://a1.twimg.com/profile_images/58079916/lowres_profilepic.jpg
I want to download them to local filesystem & store them in a single directory.
How shall I overcome name conflicts ?
In the example above, I cannot store them as lowres_profilepic.jpg.
My design idea is to treat the URLs as opaque strings, except for the last segment.
What algorithm (implemented as f) can I use to hash the prefixes into unique strings?
f( "http://a3.twimg.com/profile_images/130500759/" ) = 6tgjsdjfjdhgf
f( "http://a1.twimg.com/profile_images/58079916/" ) = iuhd87ysdfhdk
That way, I can save the files as:
6tgjsdjfjdhgf_lowres_profilepic.jpg
iuhd87ysdfhdk_lowres_profilepic.jpg
I don't want a cryptographic algorithm, as this needs to be a performant operation.
Irrespective of how you do it (hashing, encoding, database lookup), I recommend that you don't try to map a huge number of URLs to files in one big flat directory.
The reason is that, on most file systems, file lookup involves a linear scan through the filenames in a directory. So if all N of your files are in one directory, a lookup will involve N/2 comparisons on average, i.e. O(N). (Note that ReiserFS organizes the names in a directory as a B-tree; however, ReiserFS seems to be the exception rather than the rule.)
Instead of one big flat directory, it would be better to map the URIs to a tree of directories. Depending on the shape of the tree, lookup can be as good as O(log N). For example, if you organized the tree so that it had 3 levels of directories with at most 100 entries each, you could accommodate 1 million URLs. If you design the mapping to use 2-character filenames, each directory should easily fit into a single disk block, and a pathname lookup (assuming the required directories are already cached) should take a few microseconds.
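A sketch of such a mapping in Python (MD5 here only as a cheap, evenly distributed function; three 2-character hex levels give 256 entries per level, comfortably within the arithmetic above):

import hashlib
import os

def url_to_path(url: str, root: str = "cache") -> str:
    digest = hashlib.md5(url.encode()).hexdigest()
    # Three 2-character levels, then the remainder as the filename.
    return os.path.join(root, digest[0:2], digest[2:4], digest[4:6], digest[6:])

print(url_to_path("http://a3.twimg.com/profile_images/130500759/lowres_profilepic.jpg"))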
It seems what you really want is to have a legal filename that won't collide with others.
Any encoding of the URL will work, even base64: e.g. filename = base64(url) (prefer the URL-safe base64 variant, since standard base64 output can contain the / character).
A crypto hash will give you what you want - although you claim this would be a performance bottleneck, don't be sure of that until you've benchmarked.
A very simple approach:
f( "http://a3.twimg.com/profile_images/130500759/" ) = a3_130500759.jpg
f( "http://a1.twimg.com/profile_images/58079916/" ) = a1_58079916.jpg
As the other parts of this URL are constant, you can use the subdomain plus the last numeric part of the path as a unique filename.
I don't see what problem this solution could have.
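A sketch of that extraction in Python (assuming the twimg URL shape shown above; this variant also keeps the original basename):

from urllib.parse import urlparse

def twimg_filename(url: str) -> str:
    parts = urlparse(url)
    subdomain = parts.netloc.split(".")[0]            # "a3"
    segments = parts.path.strip("/").split("/")
    image_id, basename = segments[-2], segments[-1]   # "130500759", "lowres_profilepic.jpg"
    return f"{subdomain}_{image_id}_{basename}"

print(twimg_filename("http://a3.twimg.com/profile_images/130500759/lowres_profilepic.jpg"))
# a3_130500759_lowres_profilepic.jpg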
The nature of a hash is that it may result in collisions. How about one of these alternatives:
use a directory tree. Literally create subdirectories for each component of the URL.
generate a unique ID. The problem here is keeping the mapping between the real name and the saved ID. You could use a database table that generates unique IDs and maps each one to its URL, and then use that ID as the filename.
One of the key concepts of a URL is that it is unique. Why not use it?
Every algorithm that shortens the info, can produce collisions. Maybe unlikely, but possible nevertheless
While CRC32 produces at most 2^32 values regardless of your input, and so will not avoid conflicts entirely, it is still a viable option for this scenario.
It is fast, so if you generate a filename that conflicts, just add/change a character in your URL and simply re-calculate the CRC.
4.3 billion possible checksums mean the likelihood of a filename conflict, when combined with the original filename, is going to be so low as to be unimportant in normal situations.
I've used this approach myself for something similar and was pleased with the performance.
See Fast CRC32 in Software.
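A sketch of that conflict-retry loop with Python's built-in CRC-32 (the trailing-character perturbation is an improvised stand-in for "add/change a character"):

import zlib

def crc_filename(url: str, taken: set) -> str:
    candidate = url
    while True:
        name = format(zlib.crc32(candidate.encode()), "08x")
        if name not in taken:
            taken.add(name)
            return name
        candidate += "x"  # conflict: perturb the input and re-calc the CRC

used = set()
print(crc_filename("http://a3.twimg.com/profile_images/130500759/", used))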
You can use the UUID class in Java to generate a name-based UUID from the URL's bytes; it's deterministic, so you won't have a problem with file lookup:
String url = "http://www.google.com";
String shortUrl = UUID.nameUUIDFromBytes(url.getBytes()).toString();
I see your question as asking what the best hash algorithm for this matter is. You might want to check this: Best hashing algorithm in terms of hash collisions and performance for strings
The Git content management system is based on SHA-1 because it has a very minimal chance of collision.
If it's good enough for Git, it will be good enough for you.
I'm playing with thumbalizr using a modified version of their caching script, and it has a few good solutions, I think. The code is on github.com/mptre/thumbalizr, but the short version is that it uses MD5 to build the file names, then takes the first two characters of the filename and uses them to create a folder of the same name. This means the folders are easy to split up, and it's fast to find the corresponding folder without a database. Kind of blew my mind with its simplicity.
It generates file names like this
http://pappmaskin.no/opensource/delicious_snapcasa/mptre-thumbalizr/cache/fc/fcc3a328e0f4c1b51bf5e13747614e7a_1280_1024_8_90_250.png
The last part, _1280_1024_8_90_250, matches the different settings that the script uses when talking to the thumbalizr API, but I guess fcc3a328e0f4c1b51bf5e13747614e7a is a straight MD5 of the URL, in this case for thumbalizr.com.
I tried changing the config to generate images 200px wide, and that image goes in the same folder, but instead of _250.png it is called _200.png.
I haven't had time to dig into the code that much, but I'm sure it could be pulled apart from the thumbalizr logic and made more generic.
You said:
I don't want a cryptographic algorithm as it this needs to be a performant operation.
Well, I understand your need for speed, but I think you need to consider the drawbacks of your approach. If you just need to create hashes for URLs, you should stick with an established algorithm rather than write a new one, where you'd have to deal with collisions yourself, for instance.
You could then have a Dictionary<string, string> to work as a cache for the URLs: when you get a new address, first do a lookup in that dictionary and, if you don't find a match, hash it and store it for future use.
Following this line, you could give MD5 a try:
using System;
using System.Security.Cryptography;
using System.Text;

public static class UrlHasher
{
    public static void Main(string[] args)
    {
        foreach (string url in new string[] {
            "http://a3.twimg.com/profile_images/130500759/lowres_profilepic.jpg",
            "http://a1.twimg.com/profile_images/58079916/lowres_profilepic.jpg" })
        {
            Console.WriteLine(HashIt(url));
        }
    }

    private static string HashIt(string url)
    {
        // Resolving "." against the URL strips the last segment,
        // leaving only the prefix (scheme, host and parent path).
        Uri path = new Uri(new Uri(url), ".");
        MD5CryptoServiceProvider md5 = new MD5CryptoServiceProvider();
        byte[] data = md5.ComputeHash(
            Encoding.ASCII.GetBytes(path.OriginalString));
        return Convert.ToBase64String(data);
    }
}
You'll get:
rEoztCAXVyy0AP/6H7w3TQ==
0idVyXLs6sCP/XLBXwtCXA==
(Note that standard Base64 can contain the / and + characters, which aren't filesystem-safe; for actual filenames you'd want the URL-safe Base64 variant or a hex digest.)
It appears that the numerical part of twimg.com URLs is already a unique value for each image. My research indicates that the number is sequential (i.e. the example URL below is for the 433,484,366th profile image ever uploaded - which just happens to be mine), so the number is unique. My solution would be to simply use the numerical part of the filename as the "hash value", with no fear of ever finding a non-unique value.
URL: http://a2.twimg.com/profile_images/433484366/terrorbite-industries-256.png
Filename: 433484366.terrorbite-industries-256.png
Unique ID: 433484366
I already use this system for a Python script that displays notifications for new tweets, and as part of its operation it caches profile image thumbnails to reduce unnecessary downloads.
P.S. It makes no difference which subdomain the image is downloaded from; all images are available from all subdomains.
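A sketch of that extraction in Python with a regular expression (assuming the /profile_images/<id>/ shape holds):

import re

def profile_image_id(url: str) -> str:
    match = re.search(r"/profile_images/(\d+)/", url)
    if match is None:
        raise ValueError("not a profile image URL")
    return match.group(1)

url = "http://a2.twimg.com/profile_images/433484366/terrorbite-industries-256.png"
print(profile_image_id(url))  # 433484366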
