Hash in Laravel

Hi there. I'm preparing a Laravel test, and there's a question that I think is not correct.
When should you use a hash?
The available answers are:
When you want to compress the contents of a file.
When you want to securely store credit card information so you can use it later.
When you want to securely send a password over email.
When you want to identify the contents of a file without storing the entire file.
Since hashing is meant for storing passwords securely (not for sending them over email), none of these answers seems correct to me. What do you think?

Option 4. Identifying contents of a file.
A hash is a function that returns a fixed-length output for every input. Another property of hash functions is that they are deterministic: for any input a, the function always returns the same value b. That means if you provide file a and store its hash b, then whenever you supply file a again you will get hash b back. The last property is that for different inputs c and d and hash function f, f(c) should differ from f(d) (or the chance of the outputs being equal should be near zero).
In a real-world scenario, you often encounter hashes when downloading software and wanting to verify that the file you've downloaded is correct. The developer publishes a hash of the executable on their site. You download the file and calculate its checksum (hash) to compare it with the one from the website. If it matches, you know it's the same file (as long as the hash algorithm is not known to have collisions...).
It is quite a good approach to comparing file contents, because hashes take up much less space than the actual files.
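As a minimal sketch of that verification step (in Python, with a placeholder file name and checksum, since the answer itself gives no code):

import hashlib

def file_sha256(path):
    # Read the file in chunks so large downloads don't have to fit in memory.
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Placeholder values: the real file name and checksum come from the download page.
expected = "<hex digest published by the developer>"
if file_sha256("installer.bin") == expected:
    print("Checksum matches: the download is intact.")
else:
    print("Checksum mismatch: the file is corrupted or was tampered with.")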

Related

Ruby miscalculation of MD5 for file

I am calculating an MD5 sum for a file to compare it with values supplied in a text file. I use the following code to create the checksum:
require 'digest'

# Open the file in binary mode and compute the MD5 digest of its contents.
cksum = File.open(File.join(File.dirname(path), file), 'rb') do |f|
  Digest::MD5.hexdigest(f.read)
end
Every once in a while I get one that does not match, but running md5 manually at the system level shows the file has the correct MD5.
Does anyone see any issue with the process I am using to calculate the MD5 value or have any idea why they sometimes do not match when calculated by this ruby method?
For followers, there's also a method for hashing a file directly:
Digest::MD5.file('filename').hexdigest
At this point MD5 is a well-exercised message digest with an extensive suite of test vectors. It is extremely unlikely that there is an issue with Ruby's implementation of it.
There is almost certainly another explanation, such as that when your checksum executes, the file has not yet been fully written (e.g. by another process). In troubleshooting, it may be helpful to note the length of the result from f.read and verify that against the file size. You could even save the read contents to a separate file for later comparison when you discover a discrepancy. That could offer a clue.
You're correctly opening the file with binary mode, so that is good.
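As a minimal sketch of that troubleshooting idea (Python rather than Ruby, with a placeholder path): compare the number of bytes actually read with the size the filesystem reports, and log the digest alongside.

import hashlib
import os

path = "suspect_file.bin"  # placeholder path

with open(path, "rb") as f:
    data = f.read()

# If another process is still writing the file, len(data) can be smaller
# than the size the filesystem reports once the write completes.
print("bytes read:", len(data))
print("file size :", os.path.getsize(path))
print("md5       :", hashlib.md5(data).hexdigest())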

Most efficient way to validate US zip code in Flex

I have a Flex application that needs to be able to validate hundreds of zip codes fairly quickly. I also want to keep the memory space used by the app as small as possible.
Here are a few solutions my team has come up with. Any thoughts on them? Any other ideas?
Check each zip code via...
array of valid zip codes
array of invalid zip codes
soap call to a web service that validates zip codes
query a database table
a tree - 5 nodes high; nodes at the bottom would have boolean values of whether or not the zip is valid. The zip code 12345 would go from the root to its first child, to its second... you get the point
validate the first 3 digits via an array of valid USPS SCFs, then the last two digits via an array specific to that SCF.
It depends on what you are looking for. Do you want to validate the format of the zip code (i.e. that it is 5 digits long), or do you want to ensure the zip code is a valid US zip code? I will venture a guess that it is the latter. Take a look at the USPS address API (https://www.usps.com/business/webtools-address-information.htm?). I am willing to bet that will be perfect and less overhead than managing a DB or updating an array and managing all the xxxxx+4 zip codes.
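As a rough illustration of that distinction (in Python rather than Flex, with a hypothetical prefix table standing in for real USPS data):

import re

# Hypothetical SCF prefixes; a real table would be built from USPS data.
VALID_SCF_PREFIXES = {"100", "606", "941"}

def is_valid_zip(zip_code):
    # Format check: exactly five digits.
    if not re.fullmatch(r"\d{5}", zip_code):
        return False
    # Existence check: the first three digits must be a known SCF.
    return zip_code[:3] in VALID_SCF_PREFIXES

print(is_valid_zip("94103"))  # True with the sample prefixes above
print(is_valid_zip("1234"))   # False: wrong format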

Hash for unordered set?

I am trying to solve a one-way identity problem: a group of authors wants to publish something without revealing their real usernames, so is there an algorithm/library for hashing an unordered set of usernames?
Some people would suggest sorting the set alphabetically first, then joining and finally hashing, but that's not an ideal solution for a dynamically growing array.
Additional questions (not compulsory for the main question):
If such algorithm exists, can we verify if a username is one of the authors by hash?
If we already know the hash of a group of usernames and then a new author is added, can we get a new hash without knowing the previous authors' usernames?
Are you willing to accept a small probability of false positives, that is, names that aren't authors being incorrectly identified as authors if anyone checks? (The probability can be made arbitrarily small.)
If you are, then a bloom filter would fit the bill perfectly.
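Here is a minimal, illustrative Bloom filter sketch in Python; the bit-array size, hash count, and usernames are placeholder choices, not a tuned implementation.

import hashlib

class BloomFilter:
    def __init__(self, size_bits=1024, num_hashes=3):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, item):
        # Derive several bit positions from salted SHA-256 digests.
        for i in range(self.num_hashes):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def might_contain(self, item):
        # True may be a false positive; False is always correct.
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(item))

authors = BloomFilter()
for name in ["alice", "bob"]:  # placeholder usernames
    authors.add(name)
print(authors.might_contain("alice"))    # True
print(authors.might_contain("mallory"))  # False (with high probability)

Publishing only the bit array lets anyone test a candidate name without the author list itself being spelled out, at the cost of a tunable false-positive rate.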
You can always generate a hash, regardless of whether or not you know the other authors' user names. You can't guarantee that it's a unique hash, though.
If you know all the user names in advance, you can generate a minimal perfect hash, but any time you add a user name you'll have to generate a completely new hash table--with different hashes. That's obviously not a good solution.
It depends on what you want your final keys to look like.
One possibility is to assign unique sequential IDs to the user names and then obfuscate those ids so that they don't look like sequential IDs. This is similar to what YouTube does with their IDs--they turn a 64-bit number into an 11-character base64 string. I wrote a little article about that, with code in C#. Check out http://www.informit.com/guides/content.aspx?g=dotnet&seqNum=839.
And, yes, the process is reversible.
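The article's code is in C#; purely as a simplified Python illustration of the idea (a real scheme would also scramble the bits so consecutive IDs don't produce visibly similar strings):

import base64

def encode_id(n):
    # Pack a 64-bit ID into 8 bytes, then URL-safe base64 without padding:
    # 8 bytes always encode to an 11-character string.
    return base64.urlsafe_b64encode(n.to_bytes(8, "big")).rstrip(b"=").decode()

def decode_id(s):
    # Reverse the process: restore the padding, decode, unpack the integer.
    return int.from_bytes(base64.urlsafe_b64decode(s + "="), "big")

print(encode_id(12345))             # an 11-character string
print(decode_id(encode_id(12345)))  # 12345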
It sounds like a single hash won't do you any good. 1. You can't verify that a single username is in the hash; you would need to know all the usernames. 2. You can't add a new user to the hash without knowing something about the unhashed usernames (the order in which you add users to the hash will matter, for all good hash algorithms).
For #2, a partial solution is that you would not keep all the usernames, just keep something like an XOR of all the existing users. When you want to add a new user, XOR it with the existing one and re-hash the result. Then it won't matter which order you added the users in.
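One common way to realize that idea (sketched in Python with placeholder usernames, XORing the per-username hashes rather than the raw names so string lengths don't matter):

import hashlib

def username_hash(name):
    # Hash a single username to a 256-bit integer.
    return int.from_bytes(hashlib.sha256(name.encode()).digest(), "big")

def add_author(group_hash, name):
    # XOR is commutative and associative, so insertion order is irrelevant,
    # and only the current combined value plus the new name are needed.
    return group_hash ^ username_hash(name)

h1 = add_author(add_author(0, "alice"), "bob")
h2 = add_author(add_author(0, "bob"), "alice")
print(h1 == h2)  # True: the same set of names gives the same value in any order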
But the real solution, I think, is just to have a set of hashes, rather than a hash of a set. Is there a reason you can't do this? Then you can easily keep the set ordered or unordered as you wish, you can easily add users to the set, and easily check to see if a given author is already in the set.

SHA-1 hash for storing files

After reading this, it sounds like a great idea to store files using their SHA-1 hash for the directory structure.
I have no idea what this means, however; all I know is that SHA-1 and MD5 are hashing algorithms. If I calculate the SHA-1 hash using this Ruby script, and I change the file's content (which changes the hash), how do I know where the file is stored then?
My question is then, what are the basics of implementing a SHA-1/file-storage system?
If all of the files are changing content all the time, is there a better solution for storing them, or do you just have to keep updating the hash?
I'm just thinking about how to create a generic file storing system like GoogleDocs, Flickr, Youtube, DropBox, etc., something that you could reuse in different environments (such as storing PubMed journal articles or Cramster homework assignments and tests, or just images like on Flickr). I'd probably store them on Amazon EC2. Just some system so I can say "this is how I'll 99% of the time do file storing from now on", so I can stop thinking about building a solid/consistent way to store files and get onto some real problems.
First of all, if the contents of the files are changing, the filename-from-SHA-digest approach is not very suitable, because the name and location of the file in the filesystem must change whenever the contents of the file change.
Basically you first compute a SHA-1 or MD5 digest (= hash value) from the contents of the file.
When you have a digest, for example 00e4f56c0de1c61fdb926e79e8a0a65bd12930c9, you generate a file location and filename from the digest. For example, you split the first few characters of the digest into a directory structure and turn the rest of the characters into the file name. For example:
00e4f56c0de1c61fdb926e79e8a0a65bd12930c9 => some/path/00/e4/f5/6c0de1c61fdb926e79e8a0a65bd12930c9.txt
This way you only need to store the SHA-1 digest of the file to database. You can then always find out the right location and the name of the file.
Directories usually also have a maximum number of files they can contain, for example a maximum of 32,000 subdirectories and files per directory. A directory structure based on this kind of hashing makes it unlikely that you store too many files in the same directory. Hashing like this also ensures that every directory holds about the same number of files, so you won't end up in a situation where all your files are in the same directory.
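A minimal Python sketch of that mapping (the 2-character directory levels and the .txt extension simply mirror the example above):

import hashlib
import os

def content_path(root, data, extension=".txt"):
    # The digest of the contents alone determines where the file is stored.
    digest = hashlib.sha1(data).hexdigest()
    # The first three 2-character pairs become nested directories,
    # the remaining characters become the file name.
    return os.path.join(root, digest[0:2], digest[2:4], digest[4:6],
                        digest[6:] + extension)

print(content_path("some/path", b"hello world"))
# e.g. some/path/2a/ae/6c/35c94fcfb415dbe95f408b9ce91ee846ed.txt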
The idea is not to change the file content, but rather its name (and path), by using a hash value.
Changing the content with a hash would be disastrous since a hash is normally not reversible.
I'm not sure of the motivation for using a hash rather than the file name (or even rather than a long random number), but here are a few advantages of the hash approach:
the file names on the disk are uniform
the upper or lower parts of the hash value can be used to name the directories and hence distribute the files relatively uniformly
the name becomes a code, making it difficult for someone to
a) guess a file name
b) categorize pictures (should someone steal the hard drive contents)
be able to retrieve the filename and location from the file contents itself (assuming the hash comes from such content; not quite sure which use case would involve this... a bit contrived)
The general interest of using a hash is that unlike a file name, a hash is meaningless, and therefore one would require the database to relate images and "bibliographic" type data (name of uploader, date of upload, tags, ...)
In thinking about it, re-reading the referenced SO response, I don't really see much of an advantage of a hash, as compared to, say, a random number...
Furthermore... some hashes produce a numeric value, typically expressed in hexadecimal (as seen in the referenced SO question), and this could be seen as wasteful, by making the file names longer than they need to be and hence putting more stress on the file system (bigger directories...)
One advantage I see with storing files using their hash is that the file data only needs to be stored once and can then be referenced multiple times within your database. This will save you space if you have different users uploading the exact same file.
However, the downside to this is that when a user deletes what they think is their file from your app, you can't just physically delete the file from disk, because other users that uploaded the same exact file may still be using it.
The idea is that you need to come up with a name for the photo, and you probably want to scatter the files among a number of directories. One easy way to come up with a unique name is to use the hash.
So the beginning of the hash was peeled off for a multi-level directory structure and the rest of the hash was used for a filename for the jpg.
This has the additional benefit of detecting duplicate uploads.

How to generate a unique hash for a URL?

Given these two images from Twitter:
http://a3.twimg.com/profile_images/130500759/lowres_profilepic.jpg
http://a1.twimg.com/profile_images/58079916/lowres_profilepic.jpg
I want to download them to the local filesystem and store them in a single directory.
How shall I overcome name conflicts?
In the example above, I cannot store both as lowres_profilepic.jpg.
My design idea is to treat the URLs as opaque strings except for the last segment.
What algorithms (implemented as f) can I use to hash the prefixes into unique strings?
f( "http://a3.twimg.com/profile_images/130500759/" ) = 6tgjsdjfjdhgf
f( "http://a1.twimg.com/profile_images/58079916/" ) = iuhd87ysdfhdk
That way, I can save the files as:-
6tgjsdjfjdhgf_lowres_profilepic.jpg
iuhd87ysdfhdk_lowres_profilepic.jpg
I don't want a cryptographic algorithm, as this needs to be a performant operation.
Irrespective of how you do it (hashing, encoding, database lookup), I recommend that you don't try to map a huge number of URLs to files in a big flat directory.
The reason is that file lookup for most file systems involves a linear scan through the filenames in a directory. So if all N of your files are in one directory, a lookup will involve N/2 comparisons on average, i.e. O(N). (Note that ReiserFS organizes the names in a directory as a B-tree. However, ReiserFS seems to be the exception rather than the rule.)
Instead of one big flat directory, it would be better to map the URIs to a tree of directories. Depending on the shape of the tree, lookup can be as good as O(logN). For example, if you organized the tree so that it had 3 levels of directory with at most 100 entries in each directory, you could accommodate 1 million URLs. If you designed the mapping to use 2 character filenames, each directory should easily fit into a single disk block, and a pathname lookup (assuming that the required directories are already cached) should take a few microseconds.
It seems what you really want is to have a legal filename that won't collide with others.
Any encoding of the URL will work, even base64: e.g. filename = base64(url)
A crypto hash will give you what you want - although you claim this will be a performance bottleneck, don't be sure until you've benchmarked it.
A very simple approach:
f( "http://a3.twimg.com/profile_images/130500759/" ) = a3_130500759.jpg
f( "http://a1.twimg.com/profile_images/58079916/" ) = a1_58079916.jpg
As the other parts of this URL are constant, you can use the subdomain and the numeric part of the path as a unique filename.
I don't know what could be a problem with this solution.
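As a minimal Python sketch of that extraction (it assumes the URLs keep the subdomain/profile_images/id/name structure shown in the question):

from urllib.parse import urlparse

def simple_name(url):
    parts = urlparse(url)
    subdomain = parts.netloc.split(".")[0]                  # e.g. "a3"
    image_id = [s for s in parts.path.split("/") if s][-2]  # e.g. "130500759"
    return f"{subdomain}_{image_id}.jpg"

print(simple_name("http://a3.twimg.com/profile_images/130500759/lowres_profilepic.jpg"))
# a3_130500759.jpg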
The nature of a hash is that it may result in collisions. How about one of these alternatives:
use a directory tree. Literally create subdirectories for each component of the URL.
Generate a unique id. The problem here is how to keep the mapping between the real name and the saved id. You could use a database which maps between a URL and a generated unique id: simply insert a record into a database which generates unique ids, and then use that id as the filename.
One of the key concepts of a URL is that it is unique. Why not use it?
Every algorithm that shortens the info can produce collisions. Maybe unlikely, but possible nevertheless.
While CRC32 produces a maximum of 2^32 values regardless of your input and so will not avoid conflicts, it is still a viable option for this scenario.
It is fast, so if you generate a filename that conflicts, just add/change a character in your URL and simply re-calculate the CRC.
4.3 billion possible checksums mean the likelihood of a filename conflict, when combined with the original filename, is going to be so low as to be unimportant in normal situations.
I've used this approach myself for something similar and was pleased with the performance.
See Fast CRC32 in Software.
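A minimal Python sketch of that approach (the retry-on-conflict step the answer mentions is left out):

import zlib

def crc_name(url):
    # CRC32 of everything before the last path segment, plus the original name.
    prefix, original = url.rsplit("/", 1)
    return f"{zlib.crc32(prefix.encode()):08x}_{original}"

print(crc_name("http://a3.twimg.com/profile_images/130500759/lowres_profilepic.jpg"))
# 8 hex digits of the CRC followed by _lowres_profilepic.jpg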
You can use the UUID class in Java to generate a UUID from bytes; it is unique, and you won't have a problem with file lookup.
// nameUUIDFromBytes returns a deterministic, name-based (version 3, MD5) UUID.
String url = "http://www.google.com";
String shortUrl = UUID.nameUUIDFromBytes(url.getBytes()).toString();
I see your question is really about the best hash algorithm for this matter. You might want to check "Best hashing algorithm in terms of hash collisions and performance for strings".
The git version control system is based on SHA-1 because it has a very minimal chance of collision.
If it is good for git, it will be good for you too.
I'm playing with thumbalizr using a modified version of their caching script, and it has a few good solutions, I think. The code is on github.com/mptre/thumbalizr, but the short version is that it uses md5 to build the file names, and it takes the first two characters from the filename and uses them to create a folder which is named the exact same thing. This means that it is easy to break the folders up, and fast to find the corresponding folder without a database. Kind of blew my mind with its simplicity.
It generates file names like this
http://pappmaskin.no/opensource/delicious_snapcasa/mptre-thumbalizr/cache/fc/fcc3a328e0f4c1b51bf5e13747614e7a_1280_1024_8_90_250.png
the last part, _1280_1024_8_90_250, matches the different settings that the script uses when talking to the thumbalizr api, but I guess fcc3a328e0f4c1b51bf5e13747614e7a is a straight md5 of the url, in this case for thumbalizr.com
I tried changing the config to generate images 200px wide, and that image goes in the same folder, but instead of _250.png it is called _200.png.
I haven't had time to dig that much in the code, but I'm sure it could be pulled apart from the thumbalizr logic and made more generic.
You said:
I don't want a cryptographic algorithm as it this needs to be a performant operation.
Well, I understand your need for speed, but I think you need to consider the drawbacks of your approach. If you just need to create hashes for URLs, you should stick with an existing algorithm rather than writing a new one, where you'll need to deal with collisions, for instance.
You could have a Dictionary<string, string> to work as a cache for the URLs. When you get a new address, you first do a lookup in that dictionary and, if you don't find a match, hash it and store it for future usage.
Following this line, you could give MD5 a try:
using System;
using System.Security.Cryptography;
using System.Text;

class UrlHasher
{
    public static void Main(string[] args)
    {
        foreach (string url in new string[] {
            "http://a3.twimg.com/profile_images/130500759/lowres_profilepic.jpg",
            "http://a1.twimg.com/profile_images/58079916/lowres_profilepic.jpg" })
        {
            Console.WriteLine(HashIt(url));
        }
    }

    // Strips the last segment of the URL, then MD5-hashes the remaining prefix
    // and returns it as a Base64 string.
    private static string HashIt(string url)
    {
        Uri path = new Uri(new Uri(url), ".");
        MD5CryptoServiceProvider md5 = new MD5CryptoServiceProvider();
        byte[] data = md5.ComputeHash(
            Encoding.ASCII.GetBytes(path.OriginalString));
        return Convert.ToBase64String(data);
    }
}
You'll get:
rEoztCAXVyy0AP/6H7w3TQ==
0idVyXLs6sCP/XLBXwtCXA==
It appears that the numerical part of twimg.com URLs is already a unique value for each image. My research indicates that the number is sequential (i.e. the example URL below is for the 433,484,366th profile image ever uploaded - which just happens to be mine). Thus, this number is unique. My solution would be to simply use the numerical part of the filename as the "hash value", with no fear of ever finding a non-unique value.
URL: http://a2.twimg.com/profile_images/433484366/terrorbite-industries-256.png
Filename: 433484366.terrorbite-industries-256.png
Unique ID: 433484366
I already use this system for a Python script that displays notifications for new tweets, and as part of its operation it caches profile image thumbnails to reduce unnecessary downloads.
P.S. It makes no difference which subdomain the image is downloaded from; all images are available from all subdomains.
