Oftentimes when you download a file, the site offering the download will list the MD5 hash for the file being downloaded.
But I've never had a problem with a bad download. In fact, I thought that since FTP runs over TCP, which guarantees delivery, you couldn't get bad downloads.
Is there any data on how often bad downloads occur (i.e. when checking the MD5 hash tells you the download is bad)?
It's not so much a problem with the TCP/IP protocol accidentally flipping a bit (although that DID happen sometimes in the old days, it's not much of a concern now).
MD5 is especially helpful when downloading a file from a mirror site. For example, getting an ISO for a new OS. The original site can give you the MD5, and then you can download the ISO from another company. To make sure that mirror has not tampered with the image at all, you can use the MD5.
In summary, MD5 is there to validate the authenticity of the file - which may or may not mean a hardware-level mishap. Usually it's something a bit more intentional and mischievous.
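In practice the check itself is a one-liner on most systems. A minimal sketch (the file name and contents are fabricated for illustration; in reality the expected hash would be copied from the publisher's site):

```shell
# Stand-in for the real download; in reality you'd already have the ISO.
printf 'pretend ISO contents\n' > distro.iso

# The publisher's hash would normally be copied from the original site;
# here we compute it ourselves just so the sketch runs end to end.
EXPECTED=$(md5sum distro.iso | awk '{print $1}')

# Hash the downloaded file and compare.
ACTUAL=$(md5sum distro.iso | awk '{print $1}')
if [ "$ACTUAL" = "$EXPECTED" ]; then
  echo "MD5 matches: file is intact"
else
  echo "MD5 mismatch: do not trust this download" >&2
fi
# prints: MD5 matches: file is intact
```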
It is not so much for catching bad downloads; mostly it is for verifying the authenticity of the download, to ensure that no one has tampered with it.
Let's say that the file is hosted outside the web server you are connected to. The website lists the file's checksum, size, and name.
Since the user has no assurance that the file hasn't been replaced with one that looks the same but has some additions, like malware or the wrong MIME type, they can check the checksum to be sure.
I am trying to access an external hard drive which I filled with files and encrypted from my computer. After I filled it I formatted everything on my local hard drive so even though it is the same machine, none of the same users exist. Both the old and current OS are Windows 10.
I can see the files but I cannot open them, so it seems like the files are encrypted, rather than the whole external drive. When I click on Properties I see my old user at the old domain, but I cannot add the current user. I don't have read/write permissions. This makes sense, because I wouldn't want anyone to just add their user and be able to see my stuff.
The frustrating thing is that I know the password I used to encrypt it, but I can't find anywhere to enter my password so it doesn't seem to matter that I haven't forgotten it.
Can anyone please advise? Thank you.
What did you use to encrypt it? Obviously, it wasn't BitLocker, because then the file system (i.e., file names) would not be visible. Your best bet is to try to see if there's a back-door to the encryption you used, or better yet, see if there's a security hole in it. Considering it's not BitLocker, you at least have a reasonable hope of that.
However, what you want to do is precisely what encryption exists to prevent, i.e., if you could do it, it would completely defeat the purpose of encrypting the files in the first place. I mean, what if rather than you, a bad guy found your drive and wanted to do the same thing? If it were possible at all, then there would be no point in encrypting in the first place.
Finally, a third option would be to see if anyone has built a dictionary-attack brute-force password crack tool for the encryption tool you used. Since I don't know what you used to encrypt it, I don't know if such a tool exists, but if you know how the encryption works, and how the keys are generated from the passwords, theoretically one could write one themselves.
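As a hedged illustration only: if (and only if) the tool turned out to be something like `openssl enc` (purely an assumption here, since the asker's actual tool is unknown), a dictionary attack is just a loop over candidate passwords:

```shell
# Demo setup: encrypt a file with a "forgotten" password.
# (Assumes openssl enc was the encryption tool -- an assumption for the sketch.)
printf 'my secret notes\n' > plain.txt
openssl enc -aes-256-cbc -pbkdf2 -pass pass:winter2019 \
    -in plain.txt -out secret.enc

# Candidate passwords, one per line (a real wordlist would be far larger).
printf 'summer\nautumn\nwinter2019\n' > wordlist.txt

# Try each candidate; a wrong key almost always fails the CBC padding check,
# so a successful decrypt (exit status 0) flags a likely match.
while read -r pw; do
  if openssl enc -d -aes-256-cbc -pbkdf2 -pass "pass:$pw" \
       -in secret.enc -out recovered.txt 2>/dev/null; then
    echo "candidate password: $pw"
    break
  fi
done < wordlist.txt
```

The same loop shape applies to any tool whose decrypt command signals success or failure via its exit status; only the inner command changes.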
What is the best way to allow visitors to a website to download files the size of tens of gigabytes? In this case it is about movie files, so they are already relatively compressed.
Here are some assumptions:
Only one visitor will want to download a file at any time and nobody other than our servers has the file at any one time.
There is a high probability that the transfer will be interrupted due to network problems or other problems, so some kind of resume should be available.
The visitor will get a download link either on the website or via email.
Should work at least for visitors running Windows or OS X. It would be a bonus if it also works with some Linux distribution.
My experience with FTP and normal HTTP is that some visitors manage to download what they believe is the entire file but it then turns out not to be working because the file is incomplete or corrupted.
Printing the md5 sum or similar on the website and asking the visitor to run md5 on the download and then compare the results is too complicated for most website visitors.
Zipping the file adds a layer of error checking and completeness checking but often adds some manual steps for the visitor.
Is there some better solution?
I will presume you have custom videos on a closed web site that are offered per user. If that is the case, you might want to implement a custom download manager that does just what you described:
1. grabs the link (e.g. via a magnet-style link)
2. downloads the file and checks it against the MD5 hash
3. supports resume
An easy way to do this is to use an external tool like wget. To resume with wget you use the -c option; here is an article describing this in greater detail:
http://www.cyberciti.biz/tips/wget-resume-broken-download.html
After this you want to run md5sum, assuming the MD5 hash is in a file named filename.md5. This presumes that you have md5sum and wget on your system.
wget -c urlToYourfile/{file.avi,file.avi.md5}
md5sum -c *.md5
So the actual code would be just a simple wrapper with an easy installer that does this. Alternatively, it could be done in a slightly less fancy way with a .bat script; it just comes down to associating the script with a URL scheme and then specifying the URL in this form:
mydownload://url/file.avi (and setting the browser to handle it). But if you have multiple random users, it is actually easier to just make a simple wrapper for the magnet-style links, and voila.
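A minimal sketch of such a wrapper (the wget step is shown as a comment so the example runs offline; the URL and file names are placeholders, not real endpoints):

```shell
#!/bin/sh
# (1) Resume-capable download. In production this would be something like:
#       wget -c "$BASE_URL/file.avi" "$BASE_URL/file.avi.md5"
# For this offline sketch we fabricate both files locally instead.
printf 'pretend this is a movie\n' > file.avi
md5sum file.avi > file.avi.md5

# (2) Verify: md5sum -c reads "<hash>  <name>" lines and re-checks each file.
if md5sum -c file.avi.md5; then
  echo "download verified"
else
  echo "checksum mismatch: re-run the wget -c step to resume" >&2
  exit 1
fi
```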
Best regards!
I'm reverse-engineering a proprietary protocol in order to create a free and open client. By running the proprietary client program and watching traffic, I've so far mapped out most of the protocol. The authentication, though, is encrypted with SSL. It's my next target.
I know a common way forward is to route all traffic through a proxy under my control, essentially performing a man in the middle attack. For that reason, I need the program in question to accept my self-signed SSL certificate. Here's where I venture into unknown territory, though: The program is written in Adobe Flash and uses the AIR runtime. I have not been successful in locating the SSL fingerprint in the program files, and even if I could do this, I don't know anything about Flash and would probably screw something up when binary-patching the program. I'm thus left with the option of altering memory at run-time. Dumping the memory of the program confirms the existence of the signing authority's name in several places.
Does anyone know of a technique to automatically locate everything that looks like an SSL certificate in memory? Does anyone have any tips in general for me?
I use Linux, so I've so far been running the program under Wine and using GDB, as well as inspecting /proc/n/mem, but Windows-specific tips are also appreciated.
Validation of server-side certificates is usually done not by comparing certificate binaries (which you could substitute) but by performing complex analysis of the presented certificate. Consequently the easiest approach is to find the place where the final verdict on certificate validity is made (it will most likely be in AIR runtime rather than in the script) and patch that place.
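For the "locate certificates in memory" part of the question: DER-encoded X.509 certificates almost always begin with the bytes 30 82 (an ASN.1 SEQUENCE with a two-byte length), so a crude first pass is to grep a memory dump for that signature. A sketch, using a fabricated dump file:

```shell
# Fabricated memory dump: two junk bytes, then a DER-like header 30 82 01 0a.
printf '\000\000\060\202\001\012\000' > memdump.bin

# -a: treat binary as text, -b: print byte offset, -o: print only the match.
# Each reported offset is a candidate certificate start worth inspecting.
LC_ALL=C grep -abo "$(printf '\060\202')" memdump.bin | cut -d: -f1
# prints: 2
```

This will produce false positives (30 82 is just "SEQUENCE, long length"), so each hit still needs to be carved out and fed to an ASN.1 parser to confirm.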
Does anyone know what exactly happens behind the scenes when Mac OS X verifies a disk image (.dmg) file? Is there any way to extend or customize the procedure?
EDIT: I would like to create a disk image that can be verified to do exactly what it should do and nothing more. For example, if I distribute some software that manages passwords, a malicious user could modify my package to send the passwords to an unwarranted third party. To the end user the functionality would appear identical to my program, and they would never know the package was sabotaged. I would like to perform this verification at mount time.
To my knowledge, you cannot modify this procedure (unless you do some system hacks, which I don't recommend). I believe it compares the image with its internal checksum, makes sure the disk's volume header is OK, and goes through all of the files to see if any of them are corrupted.
My understanding of DMGs is limited, but as I understand it, it's essentially an OS X-specific archive format, similar to ZIPs. One option would be to also distribute the checksum of your DMG. This isn't very useful, though: if an attacker can change the DMG a user downloads from your site, they can also modify the checksum.
The functionality I believe you're looking for is code signing. It's a cryptographic verification that an app hasn't been modified since it was signed by the author. There's a bit of a barrier to using it, as you need a developer certificate from the Apple Developer Program.
Apple's documentation on codesigning can be found here:
https://developer.apple.com/library/mac/documentation/Security/Conceptual/CodeSigningGuide/Procedures/Procedures.html#//apple_ref/doc/uid/TP40005929-CH4-SW5
I'm not entirely sure if this a SO or SF question, but I'll give it a go here.
We're offering DMGs for download, with an MD5 checksum to go with each. The question is how to instruct users to actually compute the checksum and compare it with the given one. Users aren't going to be all that tech savvy.
One idea was to produce a copy-paste bash command (a string built with the current checksum) which, when executed, says "yes" or "no". But that involves the user pulling up the Terminal, which isn't very friendly, and most users won't know what they're doing. 'Black magic' isn't good for security.
Another idea would be to provide a GUI app to do the verification, but that would require initial trust, which defeats the point of offering a checksum.
So how do you bootstrap this kind of thing?
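For what it's worth, the copy-paste command idea above might look like this (the file name and hash are fabricated for the sketch; on OS X the hashing command is `md5 -q`, shown here with GNU `md5sum`):

```shell
# Stand-in for the user's download.
printf 'demo dmg contents\n' > MyApp.dmg

# The site would bake the published hash directly into the generated
# command; here we compute it so the sketch is self-contained.
EXPECTED=$(md5sum MyApp.dmg | awk '{print $1}')

# The one-liner the user copy-pastes: answers a plain yes or no.
[ "$(md5sum MyApp.dmg | awk '{print $1}')" = "$EXPECTED" ] && echo yes || echo no
# prints: yes
```

Note this only narrows the trust problem rather than solving it: the user still has to trust the page that generated the command.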
I was explaining this verification concept to a "beginner level" audience. They understood the concept of ensuring that the checksum generated from the download needed to actually match the software author's published checksum.
It was pointed out that checking manually was long-winded and prone to human error. The simplest method was to use Find within Terminal (Cmd-F), paste in your own generated checksum, and then "find" the author's genuine checksum, thus confirming a match!