I am trying to access an external hard drive that I filled with files and encrypted from my computer. After filling it, I reformatted my local hard drive, so even though it is the same machine, none of the original users exist. Both the old and current OS are Windows 10.
I can see the files but cannot open them, so it seems the individual files are encrypted rather than the whole external drive. When I look at a file's properties, I see my old user at the old domain, but I cannot add the current user, and I don't have read/write permissions. This makes sense; I wouldn't want anyone to be able to just add their user and see my stuff.
The frustrating thing is that I know the password I used to encrypt it, but I can't find anywhere to enter it, so it doesn't seem to matter that I haven't forgotten it.
Can anyone please advise? Thank you.
What did you use to encrypt it? Obviously it wasn't BitLocker, because then the file system (i.e., the file names) would not be visible. Your best bet is to see whether there's a back door in the encryption you used, or better yet, a security hole in it. Given that it's not BitLocker, you at least have a reasonable hope of that.
However, what you want to do is precisely what encryption exists to prevent; if you could do it, it would completely defeat the purpose of encrypting the files in the first place. What if, rather than you, a bad guy found your drive and wanted to do the same thing? If it were possible at all, there would be no point in encrypting.
Finally, a third option is to see whether anyone has built a dictionary-attack (brute-force) password-cracking tool for the encryption tool you used. Since I don't know what you used to encrypt it, I don't know whether such a tool exists, but if you know how the encryption works and how the keys are generated from the passwords, you could theoretically write one yourself.
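For illustration only, here is a minimal sketch in C of what the core of such a tool might look like, assuming, purely hypothetically, that the tool derives its key from the password with PBKDF2-SHA256 and encrypts with AES. The real KDF, cipher, mode, salt, and iteration count depend entirely on the product you actually used; every parameter below is an assumption.

```c
#include <windows.h>
#include <bcrypt.h>
#include <stdio.h>
#include <string.h>

#pragma comment(lib, "bcrypt.lib")

/* HYPOTHETICAL sketch: assumes the tool derives a 256-bit AES key from the
 * password with PBKDF2-SHA256 and that you have one ciphertext block whose
 * plaintext you know. Salt, iteration count, cipher, and mode are all
 * assumptions; substitute whatever the real product actually uses. */
static BOOL try_password(const char *password,
                         const BYTE *salt, ULONG salt_len,
                         const BYTE *cipher_block,   /* 16 bytes */
                         const BYTE *known_plain)    /* 16 bytes */
{
    BCRYPT_ALG_HANDLE prf = NULL, aes = NULL;
    BCRYPT_KEY_HANDLE key = NULL;
    BYTE derived[32], plain[16];
    ULONG out_len = 0;
    BOOL match = FALSE;

    /* PBKDF2-SHA256 with an assumed iteration count of 10,000. */
    BCryptOpenAlgorithmProvider(&prf, BCRYPT_SHA256_ALGORITHM, NULL,
                                BCRYPT_ALG_HANDLE_HMAC_FLAG);
    BCryptDeriveKeyPBKDF2(prf, (PUCHAR)password, (ULONG)strlen(password),
                          (PUCHAR)salt, salt_len, 10000,
                          derived, sizeof(derived), 0);

    /* Decrypt one block (ECB here, again an assumption) and compare. */
    BCryptOpenAlgorithmProvider(&aes, BCRYPT_AES_ALGORITHM, NULL, 0);
    BCryptSetProperty(aes, BCRYPT_CHAINING_MODE,
                      (PUCHAR)BCRYPT_CHAIN_MODE_ECB,
                      sizeof(BCRYPT_CHAIN_MODE_ECB), 0);
    BCryptGenerateSymmetricKey(aes, &key, NULL, 0,
                               derived, sizeof(derived), 0);
    BCryptDecrypt(key, (PUCHAR)cipher_block, 16, NULL, NULL, 0,
                  plain, sizeof(plain), &out_len, 0);
    match = (memcmp(plain, known_plain, 16) == 0);

    BCryptDestroyKey(key);
    BCryptCloseAlgorithmProvider(aes, 0);
    BCryptCloseAlgorithmProvider(prf, 0);
    return match;
}
```

The rest of such a tool is just a loop over a word list calling try_password until it returns TRUE; how feasible that is depends entirely on the iteration count and your word list.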
Related
I'm planning to write a minifilter to do file encryption and add some metadata to files.
I think I understand what I need to do in my minifilter so that files are stored in encrypted form but can still be read by the system without problems.
If an application requests a read on the file, I need to fetch the encrypted part, decipher it, and hand it back to the system.
If the file is copied, the whole file must be copied, including the metadata and the encrypted payload.
But I think I have a problem with the metadata: since I cannot find a way to tell whether the IRP_MJ_READ I receive comes from an application reading the file or from a copy/paste request, I will never know when to return the deciphered contents and when to copy the metadata along as well.
Is there any information in the IRP_MJ_READ or the IRP_MJ_CREATE that is specific to a copy/paste action?
Your task will not be easy or trivial by any means. Writing an encryption file-system filter on Windows is hard.
First of all, I'll give you a few hints and pointers. The best thing you could do is search the OSR NTFSD list for posts and threads about this; it is a gold mine when it comes to these kinds of filters.
Check out the swapbuffers sample from Microsoft. It shows how you can replace the data in the read/write I/O path with your own; in your scenario, that means encrypting on the write path and decrypting on the read path.
For starters, filter only the reads/writes that have the IRP_NOCACHE flag set, and make sure all your reads/writes are multiples of the volume sector size. See more information about this flag here. A skeletal sketch of both points follows.
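To make those two hints concrete, here is a heavily stripped-down sketch of the read side (registration, the write path, MDL handling, fast I/O, and error handling are all omitted; the XOR is only a stand-in for a real cipher; see the swapbuffers sample for how to do this properly):

```c
#include <fltKernel.h>

/* Pre-read: only intercept non-cached I/O (the path that actually reaches
 * the disk and therefore sees ciphertext); let cached reads go through. */
FLT_PREOP_CALLBACK_STATUS
PreRead(_Inout_ PFLT_CALLBACK_DATA Data,
        _In_ PCFLT_RELATED_OBJECTS FltObjects,
        _Flt_CompletionContext_Outptr_ PVOID *CompletionContext)
{
    UNREFERENCED_PARAMETER(FltObjects);
    *CompletionContext = NULL;

    if (!FlagOn(Data->Iopb->IrpFlags, IRP_NOCACHE))
        return FLT_PREOP_SUCCESS_NO_CALLBACK;  /* cached I/O: not ours */

    return FLT_PREOP_SUCCESS_WITH_CALLBACK;    /* inspect the data after */
}

/* Post-read: the buffer now holds what was read from disk (ciphertext),
 * so decrypt it in place before the requester sees it. A real filter must
 * map via the MDL when present and respect IRQL constraints; the XOR
 * below is only a placeholder for a real, sector-aligned block cipher. */
FLT_POSTOP_CALLBACK_STATUS
PostRead(_Inout_ PFLT_CALLBACK_DATA Data,
         _In_ PCFLT_RELATED_OBJECTS FltObjects,
         _In_opt_ PVOID CompletionContext,
         _In_ FLT_POST_OPERATION_FLAGS Flags)
{
    PUCHAR buf;
    ULONG  len, i;

    UNREFERENCED_PARAMETER(FltObjects);
    UNREFERENCED_PARAMETER(CompletionContext);
    UNREFERENCED_PARAMETER(Flags);

    if (!NT_SUCCESS(Data->IoStatus.Status))
        return FLT_POSTOP_FINISHED_PROCESSING;

    buf = (PUCHAR)Data->Iopb->Parameters.Read.ReadBuffer;
    len = (ULONG)Data->IoStatus.Information;   /* bytes actually read */

    for (i = 0; i < len; i++)
        buf[i] ^= 0xAA;                        /* placeholder "cipher" */

    return FLT_POSTOP_FINISHED_PROCESSING;
}
```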
Use a block cipher that aligns with the volume sector size; all the popular ones should. See CNG.
Explore from there on. Modifying only this should be pretty straightforward.
Make sure you work in a VM with snapshots, and start by monitoring one particular file and encrypting/decrypting only that file, as it will take you many tries until you succeed.
Is there any information in the IRP_MJ_READ or the IRP_MJ_CREATE that is specific to a copy/paste action?
None whatsoever. The kernel is blind to this. Even copy/paste itself, at the end of the day, results in explorer.exe opening the source file, reading from it, and writing to the destination file using system calls. The OS makes sure the system calls work and do their job; it does not know, nor does it need to know, whether a read of data or metadata came from a copy/paste, from right-clicking Properties in Explorer, or from somewhere else entirely. You might use Total Commander and copy/paste from there, and it could implement its copy totally differently, or you might use xcopy or robocopy. You need to think in a more abstract way in the kernel.
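To make that point concrete, here is roughly what every copy implementation reduces to by the time it reaches the kernel (a simplified user-mode sketch; real tools add buffering strategies, attributes, and error handling, but the calls are the same kind):

```c
#include <windows.h>

/* What any "copy/paste" boils down to: open source, open destination,
 * shuttle bytes. Explorer, Total Commander, xcopy, and robocopy all end
 * up issuing calls like these; nothing in them tells the OS "this is a
 * copy". */
BOOL NaiveCopy(LPCWSTR src, LPCWSTR dst)
{
    HANDLE hSrc = CreateFileW(src, GENERIC_READ, FILE_SHARE_READ, NULL,
                              OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
    HANDLE hDst = CreateFileW(dst, GENERIC_WRITE, 0, NULL,
                              CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
    BYTE  buf[64 * 1024];
    DWORD got, put;
    BOOL  ok = (hSrc != INVALID_HANDLE_VALUE && hDst != INVALID_HANDLE_VALUE);

    /* Each ReadFile below arrives at your filter as an IRP_MJ_READ that is
     * indistinguishable from any other application read. */
    while (ok && ReadFile(hSrc, buf, sizeof(buf), &got, NULL) && got > 0)
        ok = WriteFile(hDst, buf, got, &put, NULL) && put == got;

    if (hSrc != INVALID_HANDLE_VALUE) CloseHandle(hSrc);
    if (hDst != INVALID_HANDLE_VALUE) CloseHandle(hDst);
    return ok;
}
```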
Good luck.
I have been searching everywhere for all the combinations of things I want to accomplish, hoping something would pop up, but I can't find anything. Additionally, I am not sure I am "crafting" my query well enough, so I am hoping I can get some assistance on that here.
What I would like to accomplish is this (pseudo-logic):
Create a single container file, for example vdata.x, which will contain everything in it as a single data file.
Mount this file as an actual drive/folder in Windows so that you can write to it, read from it, and delete/modify its contents as if you were using Windows Explorer. It should be visible to the file system, applications, and the system/command line like any other "real" folder on the machine.
Preferably, the file should be able to reside on a thumb drive and be mounted either automatically or manually after being plugged in, showing up not as the thumb drive but as the file inside it (or mount both; it doesn't matter).
Additionally, the file should be lockable and encrypted, becoming accessible (despite auto-mounting, if that's the case) only after it has been authenticated with a password, random token, or whatnot.
Finally, a housekeeping element: it should be aware of the available space on its "host" (i.e., the thumb drive), so that as it reaches a certain expansion threshold it says, "Hey, move me to a larger device, make room, or stop adding more"; something akin to a running-out-of-space warning (a sketch of such a check follows this list).
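For the last point at least, the host-space check itself is simple. A minimal sketch; the drive letter and the 10% threshold are placeholders:

```c
#include <windows.h>
#include <stdio.h>

/* Hypothetical housekeeping check: warn when the container's host volume
 * (e.g., the thumb drive) drops below a free-space threshold. */
int main(void)
{
    ULARGE_INTEGER freeToCaller, total, totalFree;

    if (!GetDiskFreeSpaceExW(L"E:\\", &freeToCaller, &total, &totalFree))
        return 1;

    double freeRatio = (double)freeToCaller.QuadPart / (double)total.QuadPart;
    if (freeRatio < 0.10)
        wprintf(L"Host volume nearly full (%.0f%% free): move the container "
                L"to a larger device, make room, or stop adding data.\n",
                freeRatio * 100.0);
    return 0;
}
```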
I thought about putting this on the Software Recommendations SE, but that isn't fully up and running yet (at last check), and the range of people who visit that sub-SE might be very limited, so I am asking here to get feedback and discussion, to see whether we can answer it better here or it needs to move there.
Thank you in advance and hope to get some brilliant minds out there to help me accomplish this.
PS: I am not averse to building something like this myself, but I am limited in time and health, and besides, if it's already done, why reinvent the wheel, right? But if anything could help launch the development of such a tool, I would take that input as well. Thank you.
I've created a simple Mac app that gives you statistics on your working behavior over time. For example, your average words per minute, what language you are typing in, usage of the delete key, etc. Interesting stuff! However, some test users have said they wouldn't use the app if they didn't know me personally, since it collects keystrokes like a keylogger.
Is there some certification I can get to show that I'm not doing anything nefarious? (I never keep more than one word in memory!) Or will it be enough to have my app signed? Or open-source that part of the code? (Other parts I know I cannot make open source.)
Distributing through the Mac App Store will help, since users can see that Apple has reviewed your application and found nothing nefarious in it. [Added:] Also, sandboxing your app means that it is restricted to an explicit set of abilities, which technically skilled users can inspect. Anything not listed, you're unable to do, so this would be an easy way to prove that you don't send anything back over the internet.
Another thing would be to save all data in user-readable files. No binary plists, no Core Data stores, etc. (Whether the XML variants of either of those should count as user-readable would be more arguable, but for this purpose, I think at least an XML plist would be readable enough. Not sure about Core Data.)
If the user can read all of the raw data you store using applications that they trust (such as TextEdit), and not just your usual fancy in-app presentation of it, then they can check for themselves, and eventually trust, that you're not storing anything they wouldn't want you to.
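If it helps, here is a minimal sketch of that idea using the plain-C Core Foundation API; the stat names and values are made up for illustration:

```c
#include <CoreFoundation/CoreFoundation.h>
#include <stdio.h>

/* Write collected stats as a human-readable XML plist, so a user can open
 * the file in TextEdit and see exactly what is stored. */
int main(void)
{
    CFMutableDictionaryRef stats = CFDictionaryCreateMutable(
        kCFAllocatorDefault, 0,
        &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);

    int wpm = 72;
    CFNumberRef wpmNum = CFNumberCreate(kCFAllocatorDefault,
                                        kCFNumberIntType, &wpm);
    CFDictionarySetValue(stats, CFSTR("averageWordsPerMinute"), wpmNum);
    CFDictionarySetValue(stats, CFSTR("typingLanguage"), CFSTR("en"));

    /* kCFPropertyListXMLFormat_v1_0 (not the binary format) keeps the
     * output readable in any text editor. */
    CFDataRef xml = CFPropertyListCreateData(
        kCFAllocatorDefault, stats, kCFPropertyListXMLFormat_v1_0, 0, NULL);

    FILE *f = fopen("TypingStats.plist", "wb");
    fwrite(CFDataGetBytePtr(xml), 1, (size_t)CFDataGetLength(xml), f);
    fclose(f);

    CFRelease(xml);
    CFRelease(wpmNum);
    CFRelease(stats);
    return 0;
}
```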
If any concerned potential users email you about whether you report their keystrokes to your own server via the internet, and assuming that you don't make any internet connections at all (not even an update check), you can recommend that they should install Little Snitch, which pops up a confirmation alert anytime any app tries to connect to something. When they don't see such an alert about your app, they know that you're not phoning home.
You might also, on your product webpage, include a link to a tech profile. Here's Jesper's article proposing them, and here's one example of such a document, for one of his products.
I would think that Gatekeeper would be adequate for most users. If it turns out an app is doing bad things, Apple can pull the plug on a malware developer. So that, plus some time on the market, should establish your program as "safe" to those who are not technically inclined (e.g., who cannot understand your source).
Simply distributing it in your or your company's name can do a lot to build trust in an app (provided of course your other products/programs have not violated users' trust).
If you can get the application onto Apple's App Store, that means they will have checked it for such problems; there's no way they'd knowingly allow a key-logging app on there. Also, signing the app with an Apple certificate ensures that if it has been downloaded from the App Store and is later found to be nefarious, they can blacklist it.
Open-sourcing code would also be a good idea. I assume you can't open-source all of it because it doesn't belong to you? If so, make it clear what technologies it uses and be as open and honest as possible about what the application does and how it goes about doing it.
Does anyone know what exactly happens behind the scenes when Mac OS X verifies a disk image (.dmg) file? Is there any way to extend or customize the procedure?
EDIT: I would like to create a disk image that can be verified to do exactly what it should and nothing more. For example, if I distribute some software that manages passwords, a malicious user could modify my package to send the passwords to an unwarranted third party. To the end user, the functionality would appear identical to my program, and they would never know the package was sabotaged. I would like to perform this verification at mount time.
To my knowledge, you cannot modify this procedure (unless you do some system hacks, which I don't recommend). I believe it compares the image against its internal checksum and makes sure that the disk's volume header is OK, then goes through all of the files to see if any of them are corrupted.
My understanding of dmgs is limited, but as I understand it, a dmg is essentially an OS X-specific archive format, similar to a ZIP. One option would be to also distribute the checksum of your dmg. This isn't very useful, though: if an attacker can change the dmg a user downloads from your site, they can also modify the published checksum.
The functionality I believe you're looking for is code signing: a cryptographic verification that an app hasn't been modified since it was signed by the author. There's a bit of a barrier to using it, as you need a developer certificate from the Apple Developer Program.
Apple's documentation on codesigning can be found here:
https://developer.apple.com/library/mac/documentation/Security/Conceptual/CodeSigningGuide/Procedures/Procedures.html#//apple_ref/doc/uid/TP40005929-CH4-SW5
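For completeness, the verification side can also be done programmatically; this is roughly what the `codesign --verify` tool does. A minimal sketch using the Security framework's C API, with a placeholder path:

```c
#include <Security/Security.h>
#include <CoreFoundation/CoreFoundation.h>
#include <stdio.h>

/* Check that an app bundle's code signature is intact. */
int main(void)
{
    CFURLRef url = CFURLCreateWithFileSystemPath(
        kCFAllocatorDefault, CFSTR("/Applications/MyApp.app"),
        kCFURLPOSIXPathStyle, true);

    SecStaticCodeRef code = NULL;
    if (SecStaticCodeCreateWithPath(url, kSecCSDefaultFlags, &code)
            != errSecSuccess) {
        fprintf(stderr, "could not load code object\n");
        return 1;
    }

    /* Verifies the signature itself and the hash of every sealed
     * resource in the bundle. */
    OSStatus status = SecStaticCodeCheckValidity(code, kSecCSDefaultFlags,
                                                 NULL);
    printf("%s\n", status == errSecSuccess ? "signature valid"
                                           : "signature missing or broken");
    CFRelease(code);
    CFRelease(url);
    return status == errSecSuccess ? 0 : 1;
}
```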
I'm working hard on making my product work seamlessly on Windows 7. The problem is that there is a small set of global (not user-specific) application settings that all users should be able to change.
On previous versions I used HKLM\Software\__Company__\__Product__ for that purpose. This allowed Power Users and Administrators to modify the registry key, and everything worked correctly. Now that Windows Vista and Windows 7 have UAC, by default even an Administrator cannot open the key for writing without elevation.
A stupid solution would, of course, be to add the requireAdministrator option to the application manifest. But that is really unprofessional, since the product itself is extremely far from administration-related tasks. So I need to stay with asInvoker.
Another solution would be programmatic elevation at the moments when write access to the registry key is required. Quite apart from the fact that I don't know how to implement that, it's also pretty awkward: it interferes with the normal user experience so much that I would hardly consider it an option.
What I know should be relatively easy to accomplish is adding write access to the specified registry key during installation (I created a separate question for that; a sketch of the idea is below). This is also very similar to accessing a shared file for storing the settings.
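For reference, that install-time approach boils down to something like the following sketch, run once, elevated, by the installer. The key path is the placeholder from above, and looking the group up by the localized name "Users" is a simplification (a well-known SID would be more robust):

```c
#include <windows.h>
#include <aclapi.h>

#pragma comment(lib, "advapi32.lib")

/* Grant BUILTIN\Users write access to the product's HKLM settings key so
 * the application itself can stay asInvoker. */
int main(void)
{
    const wchar_t *keyPath = L"MACHINE\\SOFTWARE\\__Company__\\__Product__";
    PACL oldDacl = NULL, newDacl = NULL;
    PSECURITY_DESCRIPTOR sd = NULL;
    EXPLICIT_ACCESSW ea;
    DWORD err;

    /* Read the key's current DACL so we extend it rather than replace it. */
    err = GetNamedSecurityInfoW(keyPath, SE_REGISTRY_KEY,
                                DACL_SECURITY_INFORMATION,
                                NULL, NULL, &oldDacl, NULL, &sd);
    if (err != ERROR_SUCCESS) return 1;

    ZeroMemory(&ea, sizeof(ea));
    ea.grfAccessPermissions = KEY_READ | KEY_SET_VALUE;
    ea.grfAccessMode        = GRANT_ACCESS;
    ea.grfInheritance       = SUB_CONTAINERS_AND_OBJECTS_INHERIT;
    ea.Trustee.TrusteeForm  = TRUSTEE_IS_NAME;
    ea.Trustee.TrusteeType  = TRUSTEE_IS_GROUP;
    ea.Trustee.ptstrName    = (LPWSTR)L"Users";

    if (SetEntriesInAclW(1, &ea, oldDacl, &newDacl) != ERROR_SUCCESS)
        return 1;

    err = SetNamedSecurityInfoW((LPWSTR)keyPath, SE_REGISTRY_KEY,
                                DACL_SECURITY_INFORMATION,
                                NULL, NULL, newDacl, NULL);

    LocalFree(newDacl);
    LocalFree(sd);
    return err == ERROR_SUCCESS ? 0 : 1;
}
```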
My feeling is that there must be a way to accomplish what I need that is secure, straightforward, and compatible with all OSes. Any ideas?
Do you have to have it in the registry? If not, put it into a simple file, writable by everyone. Writing to HKLM requires additional privileges for a very good reason.
I'm new here (otherwise I would've left a comment) and I'm not a Windows guru, but...
IMHO the premise is wrong:
there's a reason a non-elevated user cannot modify registry keys or directories read by all users (like Users\Public, by default);
I think that allowing any user to modify a small set of global application settings may be disruptive to the experience of the other users, who didn't expect their settings to be modified;
on the other hand, I don't know your use cases...
Could you please specify why all users should be able to modify these settings?
And if indeed all users have to be able to do it... why can't you make these settings user-specific?