Modification of signed applications - macOS

I'm trying to get a better understanding of OS X code signing and the advantages it affords me in terms of protecting my software. Could someone please clarify a few questions for me?
Given an application that is Code Signed but not sandboxed:
If a hacker changes the application's binary, the application is no longer considered signed. However, will it still run correctly (with the caveat that Lion will warn the user that the application is not code signed)?
Given an application that is Code Signed and sandboxed:
What is a hacker prevented from doing if he/she changes the code in this case? Can he/she simply remove the entitlements file to create an unsigned version of the application that no longer has any sandbox restrictions?
Given a signed but not sandboxed application that contains a signed and sandboxed XPC service helper: is there anything I can do to guarantee that a hacker can't create a non-signed (and modified) version of either part? It seems to me that, as things currently stand, a hacker can do the following:
1. Create a binary-modified version of the helper. This new version would thus be non-sandboxed and non-signed.
2. Create a binary-modified version of the main application. This new version would thus also be non-sandboxed and non-signed, and able to start up the new version of the helper.
Am I wrong? If so, why?
Thanks,
Tim

You're basically right. What you're looking for is copy protection, and that's something nobody's ever figured out how to do (well), and it's not something that either code signing or sandboxing attempt to do. What sandboxing does is limit the damage if your program is taken over at runtime and made to do things it's not supposed to. What code signing does is prevent someone else from passing their program off as yours.
I used the words "their program" intentionally. You have to realize that once "your program" is on someone else's computer and they start messing with it, it's not really yours anymore; it's theirs, and they can do pretty much anything they want with it. They can take parts out (sandboxing, etc.), add parts (malicious code, etc.), change things, and so on. They could even write a "completely new" program that just happens to include parts (or the entirety) of your program.
There are things you can do to make your code hard to modify/reuse, but nobody's ever figured out how to make it impossible. Apple isn't trying; their security measures are aimed at other targets.
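For what it's worth, here's a minimal sketch (in Swift, assuming a signed app) of the kind of speed bump that's possible: asking the Security framework at launch whether your own code still matches its signature. A determined attacker can simply patch this check out as well, which is exactly the point above.

```swift
import Security

// Ask the Security framework to validate our own code signature.
// This is a deterrent, not protection: the check itself lives in
// the same patchable binary it is trying to defend.
var selfCode: SecCode?
if SecCodeCopySelf([], &selfCode) == errSecSuccess, let code = selfCode {
    let status = SecCodeCheckValidity(code, [], nil)
    if status != errSecSuccess {
        // Signature missing or broken: the binary was likely modified.
        print("Code signature check failed (OSStatus \(status))")
    }
}
```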

Related

More accurate identification of running applications on Mac OS

By using runningApplications of NSWorkspace, it is possible to get a list of running apps on Mac OS as NSRunningApplication objects, and from this get additional information like what application is in the foreground.
It is possible to identify running applications by their name (localizedName), but I'm sure that can be spoofed by rogue applications. Other properties like bundleIdentifier seem better, but I believe those too could be spoofed.
I would imagine that pretty much all of the metadata could be spoofed for applications from outside the public App Store, but for any apps obtained from the App Store, things like bundleIdentifier should be a safe way to identify an app, right?
If we include arbitrary apps that someone downloads from the Internet, is there any better way to identify an app so as to filter out rogue apps? I realize that there may be no solution without drawbacks, but I'm looking for a best-effort attempt.
As you mention, all of these things can be pretty easily spoofed. Having professionally written a product that does exactly what you're describing, I can tell you the solution is relatively straightforward: fingerprint every version of every popular app into a massive database, then fingerprint each app you discover on the machine and look it up in your database. When you discover an app you've never seen before, flag it for addition to your database.
Maintaining that database is a very large and ongoing endeavor. That's where most of the value of the product is; the agent code is not that complicated. The up-to-date database is what customers pay for. It's a pretty hard space to get into.
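To make that concrete, here's a minimal Swift sketch of the fingerprinting loop; the knownGood set is a hypothetical stand-in for the large, continuously maintained database described above.

```swift
import AppKit
import CryptoKit

// Hypothetical stand-in for the vendor-maintained fingerprint database.
let knownGood: Set<String> = []  // SHA-256 digests of vetted binaries

// Hash each running app's main executable and look it up.
for app in NSWorkspace.shared.runningApplications {
    guard let exe = app.executableURL,
          let bytes = try? Data(contentsOf: exe) else { continue }
    let digest = SHA256.hash(data: bytes)
        .map { String(format: "%02x", $0) }
        .joined()
    let verdict = knownGood.contains(digest) ? "known" : "unknown - flag for review"
    print("\(app.bundleIdentifier ?? exe.lastPathComponent): \(verdict)")
}
```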
You're correct that you can verify signatures to make sure that things downloaded from MAS or part of the OS are what they claim to be. This will get you started, but isn't nearly enough; there's just so much that doesn't come from MAS.
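Here's a sketch of that check using the Security framework; the "anchor apple generic" requirement is one plausible choice (it matches anything signed with an Apple-issued certificate, App Store or Developer ID, whereas "anchor apple" alone matches only Apple's own binaries):

```swift
import Foundation
import Security

// Returns true if the bundle at `url` is validly signed and its
// certificate chain leads back to Apple.
func isAppleSigned(_ url: URL) -> Bool {
    var staticCode: SecStaticCode?
    guard SecStaticCodeCreateWithPath(url as CFURL, [], &staticCode) == errSecSuccess,
          let code = staticCode else { return false }

    var requirement: SecRequirement?
    guard SecRequirementCreateWithString("anchor apple generic" as CFString,
                                         [], &requirement) == errSecSuccess else { return false }

    return SecStaticCodeCheckValidity(code, [], requirement) == errSecSuccess
}
```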
The other headache is that you can see what "apps" are currently running via NSWorkspace, but what counts as an "app" is pretty messy. A lot of things that you don't think of as "apps" show up in runningApplications, like MobileDeviceUpdater and nbagent. On the other hand, things like mysqld don't. Fingerprinting from runningApplications can miss things that aren't in that list, and malicious apps can lie about their bundle path to make themselves look legitimate. You can use tools like lsof to see what files a process really has open, but it gets more and more complicated.
Best of luck; it's a deep rabbit hole with dozens of corner cases, and very little documentation.

How can I show my app is not a keylogger?

I've created a simple Mac app that gives you statistics on your working behavior over time. For example, your average words per minute, what language you are typing in, usage of the delete key, etc. Interesting stuff! However, some test users have said they wouldn't use the app if they didn't know me personally, since it collects keystrokes like a keylogger.
Is there some certification I can get to show that I'm not doing anything nefarious? (I never keep more than one word in memory!) Or will it be enough to have my app signed? Or open-source that part of the code? (Other parts I know I cannot make open source.)
Distributing through the Mac App Store will help, since users can see that Apple has reviewed your application and found nothing nefarious in it. [Added:] Also, sandboxing your app means that it is restricted to an explicit set of abilities, which technically-skilled users could inspect. Anything not listed there, you're unable to do, so this would be an easy way to prove that you don't send anything back over the internet.
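As a sketch of what "inspect" can mean here: a technically-skilled user can dump your entitlements from the outside with the codesign tool, and your app can read its own at runtime through the Security framework, as in this Swift fragment:

```swift
import Security

// Read an entitlement baked into our own signed binary. These are the
// same claims a user can list with: codesign -d --entitlements - YourApp.app
if let task = SecTaskCreateFromSelf(nil) {
    let value = SecTaskCopyValueForEntitlement(
        task, "com.apple.security.app-sandbox" as CFString, nil)
    print("App Sandbox enabled: \((value as? Bool) ?? false)")
}
```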
Another thing would be to save all data in user-readable files. No binary plists, no Core Data stores, etc. (Whether the XML variants of either of those should count as user-readable would be more arguable, but for this purpose, I think at least an XML plist would be readable enough. Not sure about Core Data.)
If the user can read all of the raw data you store using applications that they trust (such as TextEdit), and not just your usual fancy in-app presentation of it, then they can check for themselves, and eventually trust, that you're not storing anything they wouldn't want you to.
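For instance, here's a sketch of writing the stats as an XML property list rather than a binary one (the TypingStats type is hypothetical):

```swift
import Foundation

// Hypothetical model for the kind of stats the app collects.
struct TypingStats: Codable {
    var wordsPerMinute: Double
    var deleteKeyCount: Int
}

// XML output keeps the file readable in TextEdit; .binary would not be.
let encoder = PropertyListEncoder()
encoder.outputFormat = .xml
do {
    let data = try encoder.encode(TypingStats(wordsPerMinute: 72, deleteKeyCount: 31))
    try data.write(to: URL(fileURLWithPath: "stats.plist"))
} catch {
    print("Failed to save stats: \(error)")
}
```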
If any concerned potential users email you to ask whether you report their keystrokes to your own server, and assuming that you don't make any internet connections at all (not even an update check), you can recommend that they install Little Snitch, which pops up a confirmation alert any time an app tries to connect to something. When they never see such an alert for your app, they'll know that you're not phoning home.
You might also, on your product webpage, include a link to a tech profile. Here's Jesper's article proposing them, and here's one example of such a document, for one of his products.
I would think that Gatekeeper would be adequate for most users. If it turns out an app is doing bad things, Apple can pull the plug on a malware developer. So that, plus some time in the wild, should establish your program as 'safe' to those who are not technically inclined (e.g., those who cannot understand your source).
Simply distributing it in your or your company's name can do a lot to build trust in an app (provided of course your other products/programs have not violated users' trust).
If you can get the application onto Apple's App Store, that means they will have checked it for such problems. There's no way they'd knowingly allow a key-logging app on there. Also, signing the app with an Apple certificate ensures that if it has been downloaded from the App Store and is later found to be nefarious, they can blacklist it.
Open-sourcing code would also be a good idea. I assume you can't open-source all of it because it doesn't belong to you? If so, make it clear what technologies it uses, and be as open and honest as you can about what the application does and how it goes about doing it.

Executing a third-party compiled program on a client's computer

I'd like to ask for your advice about improving the security of executing a compiled program on a client's computer. The idea is that we send a client a program that has been written and compiled by a third party. How can we make sure that the program won't do any harm to the client's operating system while running? And what would be the best way to achieve that goal without dramatically decreasing the program's performance?
UPDATE:
I assume that the third party doesn't want to harm the client's OS, but it can happen that they make some mistake or that their program is infected by someone else.
The program could be compiled to either bytecode or native code; it depends on the third party.
There are two main options, depending on whether or not you trust the third party.
If you trust the third party, then you just care that the program actually came from them and that it hasn't changed in transit. Code signing is a good solution here: if the third party signs the code and you check the signature, you can verify that nothing was changed in the middle and prove that it was they who wrote it.
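As a sketch of that trusted-third-party case, suppose the vendor publishes an Ed25519 public key out of band and ships a detached signature alongside each build (the file layout here is an assumption); verification is then a few lines of Swift with CryptoKit:

```swift
import CryptoKit
import Foundation

// True only if `signature` was produced over the binary's exact bytes by
// the holder of the matching private key: the file came from the vendor
// and wasn't changed in transit.
func isAuthentic(binary: URL, signature: URL,
                 publicKey: Curve25519.Signing.PublicKey) throws -> Bool {
    let payload = try Data(contentsOf: binary)
    let sig = try Data(contentsOf: signature)
    return publicKey.isValidSignature(sig, for: payload)
}
```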
If you don't trust the third party, then it is a difficult problem. The usual solution is to run the code in a "sandbox", where it is allowed to perform only a limited set of operations. This concept has been implemented for a number of languages; Google "sandbox" and you'll find a lot about it. For Perl, see SafePerl; for Java, see "Java Permissions". Variations exist for other languages too.
Depending on the language involved and what kind of permissions are required, you may be able to use the language's built-in sandboxing capabilities. For example, .NET has Code Access Security (CAS), which controls how much access a program has when it is run (ASP.NET exposes this through configurable trust levels). Java has policy files that control the same thing.
Another method that may be helpful is to run the program under Microsoft's Sysinternals Process Monitor and watch every operation the program performs.
If it's developed by a third party, then it's very difficult to know exactly what it's going to do without reviewing the code. This may be more of a contractual solution: adding penalties into the contract with the third party and agreeing on their liability for any damages.
Sign it. Google for 'digital signature' or 'code signing'.
If you have the resources, use a virtual machine. That is -- usually -- a pretty good sandbox for untrusted applications.
If this happens to be a Unix system, check out what you can do with chroot.
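Here's a sketch of the chroot approach (the paths are hypothetical; chroot(2) requires root, and it confines only filesystem access, not the network):

```swift
import Darwin

// Confine the process to a directory subtree, then exec the untrusted
// binary inside it. Everything outside /var/jail becomes unreachable
// through the filesystem.
guard chroot("/var/jail") == 0, chdir("/") == 0 else {
    fatalError("chroot failed: \(String(cString: strerror(errno)))")
}
var argv: [UnsafeMutablePointer<CChar>?] = [strdup("untrusted"), nil]
execv("/untrusted", &argv)
```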
Also, don't underestimate the value of thorough testing. You can run the app (in a non-production environment) and verify the following (in escalating levels of paranoia!):
- CPU/disk usage is acceptable
- It doesn't talk to any networked hosts it shouldn't, i.e. no 'phone home' capability
- It comes up clean in your AV program of choice
- You could even hook up pSpy or something similar to find out more about what it's doing
Additionally, if possible, run the application as a low-privileged user. This offers some degree of 'sandboxing', i.e. the app won't be able to interfere with other processes.
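A sketch of that idea, assuming a launcher that starts with root privileges and drops to the unprivileged 'nobody' account before running anything risky:

```swift
import Darwin

// Drop the group first: setgid stops working once we are no longer root.
if let nobody = getpwnam("nobody") {
    guard setgid(nobody.pointee.pw_gid) == 0,
          setuid(nobody.pointee.pw_uid) == 0 else {
        fatalError("failed to drop privileges")
    }
    // From here on, the process runs with minimal rights.
}
```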
Also, don't overlook the value of legal contracts with the vendor, which may give you some kind of recompense if there is a problem. Of course, choosing a reputable vendor in the first place offers a level of assurance as well.
-ace

Call another program's functions?

So I have this program that I really like, and it doesn't support AppleScript. I'd like to automate it a little bit. Now, I know that I could use AppleScript to tell the program to tell the menu to tell the submenu to tell the menu item to activate or whatever, but frankly I don't like AppleScript very much anyway.
When I open the NIB file in IB, I can see the messages that are being sent to FirstResponder; for example, the Copy menu item sends "copy:". Is there any way for me to invoke this directly from another program?
No. It's called protected memory for a reason, you know. The other program is completely insulated from your application. There are ways to put code into other apps, but (a) it's very inadvisable, (b) it requires root privileges, which means the rest of your app needs to be ROCK SOLID AND IMPREGNABLE, and (c) writing such code is a black art requiring knowledge of the operating system's kernel interfaces, virtual memory management, the ABI, the internals of the linker/loader, assembler programming, and the operational parameters and other specifics of the particular processor your app happens to be running on.
Really, AppleEvents and other such IPC mechanisms are there for a reason.
Your other alternatives for accessing the data you're looking for (all of which are a bit hacky, to be honest, and give you the fairly significant burden of ensuring the target app is in the state you want/expect) are:
The Accessibility APIs from the ApplicationServices framework, through which you can traverse the UI tree to grab the text from wherever you need it directly, or can activate the menu item. Access for your app has to be explicitly granted by the user, however (although this is much the same as the requirement for UI scripting).
You can use the CoreGraphics APIs (within the ApplicationServices framework again) to send keyboard events to the target application (or just to the system) directly. This would mean sending four events: Command-down, C-down, C-up, Command-up.
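A sketch of that with CGEvent (in this API the Command modifier travels as a flag on the two C-key events rather than as separate key events; virtual key 8 is kVK_ANSI_C on an ANSI layout, and recent macOS versions require Accessibility permission to post events):

```swift
import CoreGraphics

// Synthesize Cmd-C aimed at whatever app currently has keyboard focus.
let source = CGEventSource(stateID: .combinedSessionState)
let cDown = CGEvent(keyboardEventSource: source, virtualKey: 8, keyDown: true)
let cUp = CGEvent(keyboardEventSource: source, virtualKey: 8, keyDown: false)
cDown?.flags = .maskCommand   // Command held during the press
cUp?.flags = .maskCommand
cDown?.post(tap: .cghidEventTap)
cUp?.post(tap: .cghidEventTap)
```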
None of these are ideal. To be honest, your best approach would be to look at your requirements and figure out how you can best engineer around the problem by changing those requirements in some way, i.e. instead of grabbing something directly, ask the user to provide some input, etc.
You might be interested in SIMBL or in mach_inject. SIMBL is a daemon (in my fork based on mach_inject; in the original version, based on injection via a ScriptingAdditions hack) which does the injection for you, so you just need to put a bundle with your code into the SIMBL directory and SIMBL will inject it into the target application. Or you can do it yourself via mach_inject. Or, probably more conveniently, via mach_inject_framework, which injects and runs code that just loads some framework.
I think Jim may overstate the point a bit; he's not wrong, but it seems misleading. There are lots of ways to cause a Cocoa program to execute code under your control (Carbon is harder). The Accessibility API is very commonly used this way (so commonly that I expect it to be restricted eventually). F-Script can give you all kinds of access to the innards of another Cocoa program. While Input Managers may well exit the scene at some point, SIMBL is still out there today to do this kind of stuff.
Whether you like AppleScript or not, Apple Events are the primary way Apple provides for inter-program control. Have you double-checked Script Editor's Open Library function to find out whether the program really does have any AppleScript support? You can code Apple Events entirely in Objective-C these days using Leopard's Scripting Bridge. I wrote up a tutorial, if you like (it's still under-documented by Apple).
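For example, here's a minimal Scripting Bridge call (shown in Swift for brevity; app-specific commands normally need a protocol generated with sdef/sdp, but activate comes from the SBApplication base class):

```swift
import ScriptingBridge

// Drive a scriptable app through Apple Events without writing AppleScript.
// For app-specific commands, generate a header first, e.g.:
//   sdef /System/Library/CoreServices/Finder.app | sdp -fh --basename Finder
if let finder = SBApplication(bundleIdentifier: "com.apple.finder") {
    finder.activate()
}
```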
Cocoa is a reverse-engineer's dream. The same guys who host SIMBL have a nice intro to the subject. "Wolf" also writes a lot of useful information on this.
Jim's right. Many of these approaches can completely destabilize the system if done incorrectly (sometimes even if done correctly). I don't do much of this stuff on my production systems; I need them to work. But there are a lot of things you can make a Mac app do, and it's a good part of a Mac developer's training to understand how all the pieces really work.

Are "code-signed" windows applications less vulnerable to virus infections?

If you sign a Windows (native, not .NET) application with a code signing certificate, does this somehow prevent it from subsequently being infected with a virus?
Obviously if you sign an already infected file, you've got a problem...
If the application is signed, it can't be altered without invalidating the signature. So if nothing else, it's easier to identify that the application has been tampered with.
If it were an Office document, template or add-in with signed VBA modules, then (depending on the user's macro security settings), Office would pop up a dialog alerting the user before executing the macros - or refuse point blank to execute them. (It would detect that the macros did not have a valid signature, not that the file had been tampered with). I don't think that standard applications (EXEs) work like this, though.
Since the signature lets you check the integrity of the file, it would help. However, there is nothing preventing a virus from stripping the signature.
If more applications employ this as a measure, viruses will just strip the signature and infect them anyhow.
The question is: are signed apps less vulnerable to virus infections? Simply put, no. Viruses don't care whether the file is signed or not. However, detection is somewhat better with a signed file, because any alteration of its contents makes the signature invalid.
I don't recommend signing someone else's exe with your signature, if you're thinking of doing that. I tell our developers that "when you sign an app, you are saying 'I know what's in here'" That's not the true purpose of code signing, but putting your company's name on someone else's install seems like it creates a linkage between the two that you most likely don't want.
