Does anyone know exactly what happens behind the scenes when Mac OS X verifies a disk image (.dmg) file? Is there any way to extend or customize the procedure?
EDIT: I would like to create a disk image whose contents can be verified to do exactly what they should and nothing more. For example, if I distribute software that manages passwords, a malicious user could modify my package to send the passwords to an unauthorized third party. To the end user, the functionality would appear identical to my program, and they would never know the package had been sabotaged. I would like to perform this verification at mount time.
To my knowledge, you cannot modify this procedure (unless you resort to system hacks, which I don't recommend). I believe it compares the image against its internal checksum and makes sure that the disk's volume header is OK, going through all of the files to see if any of them are corrupted.
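If it helps, the same built-in checksum verification can be triggered explicitly with the hdiutil tool. A minimal sketch (the image path is a placeholder):

```swift
import Foundation

// Sketch: run `hdiutil verify`, which recomputes the image's internal
// checksum, the same data-integrity check performed at mount time.
// The .dmg path below is hypothetical.
let verify = Process()
verify.executableURL = URL(fileURLWithPath: "/usr/bin/hdiutil")
verify.arguments = ["verify", "/path/to/MyApp.dmg"]

do {
    try verify.run()
    verify.waitUntilExit()
    print(verify.terminationStatus == 0 ? "image checksum OK" : "image failed verification")
} catch {
    print("could not run hdiutil: \(error)")
}
```

Note that this only tells you the image hasn't been corrupted or altered since it was created; it says nothing about whether the contents were trustworthy in the first place, which is what code signing addresses.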
My understanding of .dmg files is limited, but as I understand it, a disk image is essentially an OS X-specific archive format, similar to a zip. One option would be to also distribute the checksum of your dmg. This isn't very useful, though: if an attacker can change the dmg a user downloads from your site, they can also modify the checksum.
The functionality I believe you're looking for is code signing. It's a cryptographic verification that an app hasn't been modified since it was signed by the author. There's a bit of a barrier to using it, as you need a developer certificate from the Apple Developer Program.
Apple's documentation on codesigning can be found here:
https://developer.apple.com/library/mac/documentation/Security/Conceptual/CodeSigningGuide/Procedures/Procedures.html#//apple_ref/doc/uid/TP40005929-CH4-SW5
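For a rough sense of what that buys you (the identity name and bundle paths below are placeholders, not real values), signing and then verifying a bundle might look like this:

```swift
import Foundation

// Sketch: sign an app bundle with a Developer ID certificate, then ask
// codesign whether anything in it has been altered since signing.
// The identity string and paths are hypothetical.
func runCodesign(_ arguments: [String]) throws -> Int32 {
    let task = Process()
    task.executableURL = URL(fileURLWithPath: "/usr/bin/codesign")
    task.arguments = arguments
    try task.run()
    task.waitUntilExit()
    return task.terminationStatus
}

do {
    // Requires the certificate to be present in your keychain.
    _ = try runCodesign(["--force", "--sign", "Developer ID Application: Example Corp",
                         "/path/to/MyApp.app"])
    // Verification fails (non-zero exit) if any signed file was modified afterwards.
    let status = try runCodesign(["--verify", "--deep", "--strict", "/path/to/MyApp.app"])
    print(status == 0 ? "signature valid" : "bundle has been modified since signing")
} catch {
    print("could not run codesign: \(error)")
}
```

Gatekeeper performs a similar check automatically the first time a user opens a downloaded app, which is what makes tampering visible to end users.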
I've created a simple Mac app that gives you statistics on your working behavior over time. For example, your average words per minute, what language you are typing in, usage of the delete key, etc. Interesting stuff! However, some test users have said they wouldn't use the app if they didn't know me personally, since it collects keystrokes like a keylogger.
Is there some certification I can get to show that I'm not doing anything nefarious? (I never keep more than one word in memory!) Or will it be enough to have my app signed? Or open-source that part of the code? (Other parts I know I cannot make open source.)
Distributing through the Mac App Store will help, since users can see that Apple has reviewed your application and found nothing nefarious in it. [Added:] Also, sandboxing your app means that it is restricted to an explicit set of abilities, which technically-skilled users can inspect. Anything not listed there, you're unable to do, so this is an easy way to demonstrate that you don't send anything back over the internet.
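For example (the application path here is a placeholder), a curious user can dump a signed app's declared abilities with the codesign tool; a small sketch that shells out to it:

```swift
import Foundation

// Sketch: list the entitlements a sandboxed app was signed with.
// The application path is hypothetical.
let dump = Process()
dump.executableURL = URL(fileURLWithPath: "/usr/bin/codesign")
dump.arguments = ["-d", "--entitlements", "-", "/Applications/TypingStats.app"]

do {
    try dump.run()
    dump.waitUntilExit()
} catch {
    print("could not run codesign: \(error)")
}

// The output includes keys such as com.apple.security.app-sandbox; for a
// sandboxed app, the absence of com.apple.security.network.client means
// the app cannot open outgoing network connections.
```

In practice a user would just run the same codesign command in Terminal rather than wrapping it in code.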
Another thing would be to save all data in user-readable files. No binary plists, no Core Data stores, etc. (Whether the XML variants of either of those should count as user-readable would be more arguable, but for this purpose, I think at least an XML plist would be readable enough. Not sure about Core Data.)
If the user can read all of the raw data you store using applications that they trust (such as TextEdit), and not just your usual fancy in-app presentation of it, then they can check for themselves, and eventually trust, that you're not storing anything they wouldn't want you to.
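As a rough sketch of that approach (the keys, values, and save location are all made up for illustration), writing stats out as an XML property list rather than a binary one only takes an explicit format flag:

```swift
import Foundation

// Sketch: save stats as an XML plist so users can open the raw file in
// TextEdit and see exactly what is stored. Keys, values, and path are hypothetical.
let stats: [String: Any] = [
    "averageWordsPerMinute": 72,
    "deleteKeyPresses": 418,
    "language": "en"
]

do {
    let data = try PropertyListSerialization.data(fromPropertyList: stats,
                                                  format: .xml,
                                                  options: 0)
    let folder = URL(fileURLWithPath: NSHomeDirectory())
        .appendingPathComponent("Library/Application Support/TypingStats")
    try FileManager.default.createDirectory(at: folder, withIntermediateDirectories: true)
    try data.write(to: folder.appendingPathComponent("stats.plist"))
} catch {
    print("could not save stats: \(error)")
}
```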
If any concerned potential users email you about whether you report their keystrokes to your own server via the internet, and assuming that you don't make any internet connections at all (not even an update check), you can recommend that they should install Little Snitch, which pops up a confirmation alert anytime any app tries to connect to something. When they don't see such an alert about your app, they know that you're not phoning home.
You might also, on your product webpage, include a link to a tech profile. Here's Jesper's article proposing them, and here's one example of such a document, for one of his products.
I would think that Gatekeeper would be adequate for most users. If it turns out an app is doing bad things, then Apple can pull the plug on a malware developer. So that, plus some time out in the wild, should establish your program as 'safe' to those who are not technically inclined (e.g. who cannot read your source).
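For what it's worth, the assessment Gatekeeper applies when a user first opens a downloaded app can also be requested explicitly via spctl; a small sketch (the bundle path is hypothetical):

```swift
import Foundation

// Sketch: ask Gatekeeper (spctl) whether it would allow the app to run,
// the same policy check applied when a user first opens a downloaded copy.
// The application path is a placeholder.
let assess = Process()
assess.executableURL = URL(fileURLWithPath: "/usr/sbin/spctl")
assess.arguments = ["--assess", "--type", "execute", "--verbose",
                    "/Applications/TypingStats.app"]

do {
    try assess.run()
    assess.waitUntilExit()
    print(assess.terminationStatus == 0 ? "Gatekeeper accepts this app"
                                        : "Gatekeeper would block this app")
} catch {
    print("could not run spctl: \(error)")
}
```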
Simply distributing it in your or your company's name can do a lot to build trust in an app (provided of course your other products/programs have not violated users' trust).
If you can get the application onto Apple's App Store, then that means they will have checked it for such problems; there's no way they'd knowingly allow a key-logging app on there. Also, signing the app with an Apple certificate ensures that if it has been downloaded from the App Store and is later found to be nefarious, they can blacklist it.
Open-sourcing code would also be a good idea. I assume you can't open-source all of it because it doesn't belong to you? If so, then make it clear what technologies it uses and be as open and honest as you can about what the application does and how it goes about doing it.
I'm trying to get a better understanding of OS X code signing and the advantages it affords me in terms of protecting my software. Could someone please clarify certain questions for me?
Given an application that is Code Signed but not sandboxed:
Should a hacker change the application's binary, the application is no longer considered signed. However, will it still run correctly (with the caveat that Lion will warn the user that the application is not code signed)?
Given an application that is Code Signed and sandboxed:
What will no longer work if a hacker changes the code in this case? Can he/she simply remove the entitlements file to create an unsigned version of the application that no longer has any sandbox restrictions?
Given a signed but not sandboxed application that contains a signed and sandboxed XPC service helper, is there anything I can do to guarantee that a hacker can't create a non-signed (and modified) version of either part? It seems to me that, as it currently stands, a hacker can do the following:
1) Create a binary-modified version of the helper. This new version would thus be non-sandboxed and non-signed.
2) Create a binary-modified version of the main application. This new version would thus also be non-sandboxed and non-signed, and able to start up the new version of the helper.
Am I wrong? If so, why?
Thanks,
Tim
You're basically right. What you're looking for is copy protection, and that's something nobody has ever figured out how to do well, and it's not something that either code signing or sandboxing attempts to do. What sandboxing does is limit the damage if your program is taken over at runtime and made to do things it's not supposed to. What code signing does is prevent someone else from passing their program off as yours.
I used the words "their program" intentionally. You have to realize that once "your program" is on someone else's computer and they start messing with it, it's not really yours anymore; it's theirs, and they can do pretty much anything they want with it. They can take parts out (sandboxing, etc.), add parts (malicious code, etc.), change things, and so on. They could even write a "completely new" program that just happens to include parts (or the entirety) of your program.
There are things you can do to make your code hard to modify/reuse, but nobody's ever figured out how to make it impossible. Apple isn't trying; their security measures are aimed at other targets.
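One speed bump people sometimes add, purely as a tamper check rather than protection: ask the Security framework at launch whether the running code still matches its signature. A minimal sketch (detection only; an attacker who can patch the binary can also patch out this check):

```swift
import Foundation
import Security

// Sketch: dynamic validity check against the app's own code signature.
// A modified binary fails SecCodeCheckValidity; this detects tampering
// but does nothing to prevent it.
func signatureIsStillValid() -> Bool {
    var selfCode: SecCode?
    guard SecCodeCopySelf(SecCSFlags(), &selfCode) == errSecSuccess,
          let code = selfCode else {
        return false
    }
    return SecCodeCheckValidity(code, SecCSFlags(), nil) == errSecSuccess
}

if !signatureIsStillValid() {
    // React however makes sense for your app; here we just log it.
    NSLog("Code signature no longer validates; the bundle may have been modified.")
}
```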
I have a Mac application in the App Store and am looking to adopt sandboxing before it becomes mandatory. I've run into two issues and was hoping to post here for some insight into best practice in the following situations:
Within my application I use an NSOpenPanel to prompt the user to load a proprietary file format. After loading the file my application parses it and gathers a list of NSURLs to local files. These local files are then passed to NSImage's initWithContentsOfURL: method. Unfortunately, the act of loading the image files causes the sandbox to cancel the action. I understand that this happens because the user has given my application explicit permission to open the file selected by the NSOpenPanel, but not for the files referenced within my proprietary format. How can I handle this (supposedly fairly common) situation?
I have a unix executable file contained within my application's bundle that I would like to execute using an NSTask. Is this legal under sandboxing, given that the script is contained within my bundle?
If anyone could clarify the above points, that would be appreciated.
1) From my understanding, the NSURL object contains the permissions necessary to re-access the files later, so if you are using hardcoded paths, you could replace them with archived NSURL objects. This assumes the user selected those files within an NSOpenPanel at an earlier point.
2) You can run an NSTask, but it inherits the permissions of your main app.
Hopefully others can chime in with more information. I've found the Mac Developer Boards, specifically the "Application Sandboxing" forum, to be helpful, as Apple employees often drop in. So far, I've found sandboxing to be an unusable mess.
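To illustrate point 2 above (the helper name and its argument are hypothetical): launching an executable that ships inside the app bundle is just a matter of pointing NSTask at it, and the child process runs under the same sandbox as the app.

```swift
import Foundation

// Sketch: run a helper executable shipped inside the app bundle.
// "mytool" and its argument are placeholders; the child process
// inherits the sandbox of the host app.
guard let helperURL = Bundle.main.url(forAuxiliaryExecutable: "mytool") else {
    fatalError("helper not found in bundle")
}

let task = Process()              // Process is the Swift name for NSTask
task.executableURL = helperURL
task.arguments = ["--some-flag"]  // hypothetical argument

do {
    try task.run()
    task.waitUntilExit()
} catch {
    print("failed to launch helper: \(error)")
}
```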
Easy one first: you can run your helper with NSTask and it will inherit the sandbox of your app.
Those URLs: not easily/reliably/at all. There is a way to save NSURLs to files you have access to such that a subsequent run of your application can re-load them and regain access; however, it is deemed fragile and not to be recommended. Read the Apple developer forums: this is an Apple-acknowledged problem they are "working on", so using the fragile solution is probably not worth the effort. Search the developer forums for it if you really want to hack something that sort of works now.
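For reference, the supported mechanism that eventually shipped for persisting access to user-chosen files is security-scoped bookmarks (it requires the com.apple.security.files.bookmarks.app-scope entitlement). A rough sketch, with the storage details left out:

```swift
import Foundation

// Sketch of security-scoped bookmarks (needs the
// com.apple.security.files.bookmarks.app-scope entitlement).
// `pickedURL` stands in for a URL the user chose in an NSOpenPanel.

func saveBookmark(for pickedURL: URL) throws -> Data {
    // Persist this Data somewhere (e.g. UserDefaults) for later runs.
    return try pickedURL.bookmarkData(options: .withSecurityScope,
                                      includingResourceValuesForKeys: nil,
                                      relativeTo: nil)
}

func restoreAccess(from bookmark: Data) throws -> URL {
    var isStale = false
    let url = try URL(resolvingBookmarkData: bookmark,
                      options: .withSecurityScope,
                      relativeTo: nil,
                      bookmarkDataIsStale: &isStale)
    // Balance with url.stopAccessingSecurityScopedResource() when finished.
    _ = url.startAccessingSecurityScopedResource()
    return url
}
```

Note this only covers files the user actually picked; for files merely referenced inside a document, the user still has to grant access (e.g. via another open panel) before you can bookmark them.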
I'm looking at getting a Firefox plugin developed - but is it possible to create a plugin that is for private use only, so that only those I share it with have it, rather than it being open to the masses? I need this for two reasons: 1) while in beta, and 2) for my clients' use only to start with.
Of course: just distribute the plugin install package (XPI, if I am not mistaken) to the target users.
Note that this won't prevent leaks, if any occur.
You could also be more fancy and "lock" the plugin to a set of computers: you have access to the whole machine when you design a plugin (e.g. NPAPI based). Then again, a determined hacker can always find a way.
Yes, of course this is possible.
Extensions (mentioning these because the term "plugin" is often misused to mean "extension") can be packaged in a .xpi file that can be opened and installed by Firefox; see also this tutorial.
Proper plugins are a bit more work, see the Mozilla developer wiki.
While this works mostly on a psychological level, prominently displaying identifying information such as the user's name/email address or a company name/logo may also help deter users from redistributing your work: it is obviously personalized/tailored software, and they may not want that information distributed along with it.
Also, once you do distribute your extension to your target users, you can digitally sign the XPI files for each individual user (i.e. fingerprint individual files within the XPI package), so that you can trace back any leaks.
In addition, you as the author of the extension are of course free to implement a simple "talkback" mechanism so that you can track use of your extension, along with all sorts of other information that may be relevant to you (e.g. usage stats).
Similarly, XPI files are conventional ZIP files, so you can also password-protect them to make it more complicated to install them without proper instructions.
If you sign a Windows (native, not .NET) application with a code signing certificate, does this somehow prevent it from being subsequently infected with a virus?
Obviously if you sign an already infected file, you've got a problem...
If the application is signed, it can't be altered without invalidating the signature. So if nothing else, it's easier to identify that the application has been tampered with.
If it were an Office document, template or add-in with signed VBA modules, then (depending on the user's macro security settings), Office would pop up a dialog alerting the user before executing the macros - or refuse point blank to execute them. (It would detect that the macros did not have a valid signature, not that the file had been tampered with). I don't think that standard applications (EXEs) work like this, though.
Since it checks the integrity of the file, it would help. However, there is nothing preventing a virus from stripping the signature.
If more applications employ this as a measure, viruses will just strip the signature and infect the file anyhow.
The question is: are signed apps less vulnerable to virus infections? Simply put, no. Viruses don't care whether the file is signed or not. However, it is easier to detect that a signed file has had its contents altered, since the signature becomes invalid, so detection is somewhat better.
I don't recommend signing someone else's exe with your signature, if you're thinking of doing that. I tell our developers: "when you sign an app, you are saying 'I know what's in here'." That's not the true purpose of code signing, but putting your company's name on someone else's install seems to create a linkage between the two that you most likely don't want.