Are "code-signed" windows applications less vulnerable to virus infections? - windows

If you sign a Windows (native, not .NET) application with a code signing certificate, does this somehow prevent it from subsequently being infected with a virus?
Obviously if you sign an already infected file, you've got a problem...

If the application is signed, it can't be altered without invalidating the signature. So if nothing else, it's easier to identify that the application has been tampered with.
If it were an Office document, template, or add-in with signed VBA modules, then (depending on the user's macro security settings) Office would pop up a dialog alerting the user before executing the macros, or refuse point-blank to execute them. (It would detect that the macros did not have a valid signature, not that the file had been tampered with.) I don't think that standard applications (EXEs) work like this, though.
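To make the "easier to identify tampering" point concrete, here is a minimal sketch that checks an EXE's Authenticode signature by shelling out to signtool. It assumes the Windows SDK's signtool.exe is on PATH, and the file name is a placeholder.

```python
# Minimal sketch: ask signtool whether an EXE's Authenticode signature
# still verifies. Assumes signtool.exe (Windows SDK) is on PATH;
# "myapp.exe" is a placeholder.
import subprocess

def signature_valid(exe_path: str) -> bool:
    # signtool exits with 0 only when the signature verifies against
    # the default authentication policy (/pa).
    result = subprocess.run(
        ["signtool", "verify", "/pa", exe_path],
        capture_output=True, text=True,
    )
    return result.returncode == 0

if __name__ == "__main__":
    print(signature_valid("myapp.exe"))
```

A file that verified when you shipped it but no longer verifies has been modified somewhere along the way, although, as noted below, nothing stops malware from stripping the signature entirely.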

Since the signature lets you check the file's integrity, it would help. However, there is nothing preventing a virus from stripping the signature.
If more applications employ signing as a defensive measure, viruses will simply strip the signature and infect the file anyway.

The question is: are signed apps less vulnerable to virus infections? Simply put, no. Viruses don't care whether the file is signed or not. However, you can more easily detect that a signed file has had its content altered, because the signature becomes invalid.
I don't recommend signing someone else's exe with your signature, if you're thinking of doing that. I tell our developers that when you sign an app, you are saying "I know what's in here." That's not the true purpose of code signing, but putting your company's name on someone else's install seems to create a linkage between the two that you most likely don't want.

Related

Modification of signed applications

I'm trying to get a better understanding of OS X code signing and the advantages it affords me in terms of protecting my software. Could someone please clarify a few points for me?
Given an application that is Code Signed but not sandboxed:
If a hacker changes the application's binary, the application is no longer considered signed. However, will it still run correctly (with the caveat that Lion will warn the user about the application not being code signed)?
Given an application that is Code Signed and sandboxed:
What will not happen if a hacker changes the code in this case? Can he/she simply remove the entitlements file to create an unsigned version of the application that no longer has any sandbox restrictions?
Given a signed but not sandboxed application that contains a signed and sandboxed XPC service helper, is there anything I can do to guarantee that a hacker can't create a non-signed (and modified) version of either part? It seems to me that, as it currently stands, a hacker can do the following:
Create a binary-modified version of the helper. This new version would thus be non-sandboxed and non-signed.
Create a binary-modified version of the main application. This new version would thus also be non-sandboxed and non-signed, and able to start up the new version of the helper.
Am I wrong? If so, why?
Thanks,
Tim
You're basically right. What you're looking for is copy protection, and that's something nobody's ever figured out how to do (well), and it's not something that either code signing or sandboxing attempt to do. What sandboxing does is limit the damage if your program is taken over at runtime and made to do things it's not supposed to. What code signing does is prevent someone else from passing their program off as yours.
I used the words "their program" intentionally. You have to realize that once "your program" is on someone else's computer and they start messing with it, it's not really yours anymore; it's theirs, and they can do pretty much anything they want with it. They can take parts out (sandboxing, etc.), add parts (malicious code, etc.), change things, and so on. They could even write a "completely new" program that just happens to include parts of (or the entirety of) your program.
There are things you can do to make your code hard to modify/reuse, but nobody's ever figured out how to make it impossible. Apple isn't trying; their security measures are aimed at other targets.
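To illustrate the detection side of this on OS X, here is a minimal sketch that asks codesign whether an app bundle's signature is still intact. It assumes the standard codesign tool is available, and the application path is a placeholder.

```python
# Minimal sketch: codesign exits non-zero if the bundle was modified
# after it was signed. "/Applications/MyApp.app" is a placeholder path.
import subprocess

def signature_intact(app_path: str) -> bool:
    result = subprocess.run(
        ["codesign", "--verify", "--deep", "--strict", app_path],
        capture_output=True, text=True,
    )
    return result.returncode == 0

print(signature_intact("/Applications/MyApp.app"))
```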

Search for and substitute SSL certificate in memory at run-time

I'm reverse-engineering a proprietary protocol in order to create a free and open client. By running the proprietary client program and watching traffic, I've so far mapped out most of the protocol. The authentication, though, is encrypted with SSL. It's my next target.
I know a common way forward is to route all traffic through a proxy under my control, essentially performing a man in the middle attack. For that reason, I need the program in question to accept my self-signed SSL certificate. Here's where I venture into unknown territory, though: The program is written in Adobe Flash and uses the AIR runtime. I have not been successful in locating the SSL fingerprint in the program files, and even if I could do this, I don't know anything about Flash and would probably screw something up when binary-patching the program. I'm thus left with the option of altering memory at run-time. Dumping the memory of the program confirms the existence of the signing authority's name in several places.
Does anyone know of a technique to automatically locate everything that looks like an SSL certificate in memory? Does anyone have any tips in general for me?
I use Linux, so I've so far been running the program under Wine and using GDB, as well as inspecting /proc/n/mem, but Windows-specific tips are also appreciated.
Validation of server-side certificates is usually done not by comparing certificate binaries (which you could substitute) but by performing a complex analysis of the presented certificate. Consequently, the easiest approach is to find the place where the final verdict on certificate validity is made (it will most likely be in the AIR runtime rather than in the script) and patch that place.
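For the "locate everything that looks like an SSL certificate in memory" part of the question, one common heuristic is to scan a memory dump for DER-encoded X.509 structures, which begin with an ASN.1 SEQUENCE using a long-form length. A minimal sketch, assuming you have already dumped the process memory to a file ("dump.bin" is a placeholder):

```python
# Minimal sketch: heuristically locate DER-encoded certificates in a raw
# memory dump. A typical certificate starts with SEQUENCE + long-form
# length (0x30 0x82 <hi> <lo>), immediately followed by the nested
# TBSCertificate SEQUENCE (again 0x30 0x82).
import re

DER_CERT_PATTERN = re.compile(rb"\x30\x82(..)\x30\x82", re.DOTALL)

def find_candidate_certs(path):
    data = open(path, "rb").read()
    for match in DER_CERT_PATTERN.finditer(data):
        body_len = int.from_bytes(match.group(1), "big")
        # 4 header bytes (tag + length) plus the declared body length
        yield match.start(), data[match.start():match.start() + body_len + 4]

for offset, blob in find_candidate_certs("dump.bin"):
    print("possible certificate at offset 0x%x, %d bytes" % (offset, len(blob)))
```

This will produce false positives (any large-enough SEQUENCE-of-SEQUENCE matches), but it narrows down the memory regions worth inspecting in GDB.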

Mac OS X Disk Image Verification

Does anyone know what exactly happens behind the scenes when Mac OS X verifies a disk image (.dmg) file? Is there any way to extend or customize the procedure?
EDIT: I would like to create a disk image that verifies that it does exactly what it should do and nothing more. For example, if I distribute some software that manages passwords, a malicious user could modify my package to send the passwords to an unwarranted third party. To the end user, the functionality would appear to be identical to my program, and they would never know the package was sabotaged. I would like to perform this verification at mount time.
To my knowledge, you cannot modify this procedure (unless you do some system hacks which I don't recommend). I believe it compares it with the internal checksum and makes sure that the disk's volume header is OK. It goes through all of the files to see if any of them are corrupted.
My understanding of DMGs is limited, but as I understand it, a DMG is essentially an OS X-specific archive format, similar to a zip. One option would be to also distribute a checksum of your DMG. This isn't very useful, though: if an attacker can change the DMG a user downloads from your site, they can also modify the checksum.
The functionality I believe you're looking for is codesigning. It's a cryptographic verification that an app hasn't been modified since it was signed by the author. There's a bit of a barrier to using this, as you need a developer certificate from the Apple Developer Program.
Apple's documentation on codesigning can be found here:
https://developer.apple.com/library/mac/documentation/Security/Conceptual/CodeSigningGuide/Procedures/Procedures.html#//apple_ref/doc/uid/TP40005929-CH4-SW5
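As a small illustration of the codesigning suggestion, here is a minimal sketch that asks Gatekeeper whether it would accept an application from the mounted image. It assumes the standard spctl tool on OS X, and the application path is a placeholder.

```python
# Minimal sketch: ask Gatekeeper (spctl) to assess an app from a mounted
# disk image. "/Volumes/MyApp/MyApp.app" is a placeholder path.
import subprocess

result = subprocess.run(
    ["spctl", "--assess", "--type", "exec", "--verbose",
     "/Volumes/MyApp/MyApp.app"],
    capture_output=True, text=True,
)
print("accepted" if result.returncode == 0 else result.stderr.strip())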

How to get your site trusted by AV?

I want to make an exe file available to download from my website, but when I do a test download, Norton deletes the file without any option to keep it. Presumably most AV does the same thing, so nobody is ever going to be able to download it.
Wrapping the exe in a zip seems to make the exe completely invisible to Norton, so that is one approach, but it just adds an extra step for users to go through. And I have downloaded exe files from other sites, so it is certainly possible.
Would signing the exe help? What other factors affect my site's apparent lack of trustworthiness?
Signing your EXE is probably the best way to go... As far as "trusting" a website goes, I can't speak for Norton but I know that McAfee has user feedback mechanisms for rating sites, and that's what their products use to filter bad actors on the web.
Putting the executable in a zip file is indeed the way to go. Norton is an extreme pain, but most browsers and/or anti-virus applications will at least warn the user when they attempt to download an executable. It is more important to instil confidence in your users than to make life easy for them, and if they see any kind of warning message they are likely to pull the plug.
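A minimal sketch of the zip-wrapping approach mentioned above, with placeholder file names:

```python
# Minimal sketch: package the installer in a zip for download.
# "setup.exe" and "setup.zip" are placeholders.
import zipfile

with zipfile.ZipFile("setup.zip", "w", compression=zipfile.ZIP_DEFLATED) as zf:
    zf.write("setup.exe")
```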

Executing a third-party compiled program on a client's computer

I'd like to ask for your advice about improving the security of executing a compiled program on a client's computer. The idea is that we send a compiled program to a client, but the program has been written and compiled by a third party. How can we make sure that the program won't do any harm to the client's operating system while running? What would be the best way to achieve that goal without dramatically decreasing the program's performance?
UPDATE:
I assume that the third party doesn't want to harm the client's OS, but it can happen that they make a mistake or their program is infected by someone else.
The program could be compiled to either bytecode or native code; it depends on the third party.
There are two main options, depending on whether or not you trust the third party.
If you trust the third party, then you just care that the program actually came from them and that it hasn't changed in transit. Code signing is a good solution here. If the third party signs the code, and you check the signature, then you can verify that nothing has changed in the middle and prove that it was they who wrote it.
If you don't trust the third party, then it is a difficult problem. The usual solution is to run code in a "sandbox", where it is allowed to perform a limited set of operations. This concept has been implemented for a number of languages - google "sandbox" and you'll find a lot about it. For Perl, see SafePerl, for Java see "Java Permissions". Variations exist for other languages too.
Depending on the language involved and what kind of permissions are required, you may be able to use the language's built in sandboxing capabilities. For example, earlier versions of .NET have a "Trust Level" that can be set to control how much access a program has when it's run (newer versions have a similar feature called Code Access Security (CAS)). Java has policy files that control the same thing.
Another method that may be helpful is to run the program under Microsoft's Sysinternals Process Monitor and watch all of the operations the program performs.
If it's developed by a third party, then it's very difficult to know exactly what it's going to do without reviewing the code. This may be more of a contractual solution - adding penalties into the contract with the third-party and agreeing on their liability for any damages.
Sign it. Google for 'digital signature' or 'code signing'.
If you have the resources, use a virtual machine. That is -- usually -- a pretty good sandbox for untrusted applications.
If this happens to be a Unix system, check out what you can do with chroot.
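A minimal sketch of the chroot idea, assuming a Unix system, root privileges, and a jail directory that already contains the binary plus any libraries it needs (all paths are placeholders):

```python
# Minimal sketch: run an untrusted binary inside a chroot jail.
# Must be started as root; "/srv/jail" and "/untrusted" are placeholders,
# and "/untrusted" is resolved inside the jail.
import os
import subprocess

def enter_jail():
    os.chroot("/srv/jail")  # confine the child to the jail's filesystem
    os.chdir("/")           # make sure the cwd is inside the jail

subprocess.run(["/untrusted"], preexec_fn=enter_jail)
```

In practice you would also drop root privileges inside the jail before the exec, along the lines of the low-privilege sketch further down.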
The other thing is: don't underestimate the value of thorough testing. You can run the app (in a non-production environment) and verify the following (in escalating levels of paranoia!):
CPU/disk usage is acceptable.
It doesn't talk to any networked hosts it shouldn't, i.e. no 'phone home' capability.
Scan it with your AV program of choice.
You could even hook up pSpy or something to find out more about what it's doing.
Additionally, if possible, run the application as a low-privileged user (a sketch of this follows after this answer). This will offer some degree of 'sandboxing', i.e. the app won't be able to interfere with other processes.
Also, don't overlook the value of legal contracts with the vendor, which may give you some kind of recompense if there is a problem. Of course, choosing a reputable vendor in the first place offers a level of assurance as well.
-ace
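Following up on the low-privilege suggestion in the answer above, here is a minimal sketch that runs the third-party program as an unprivileged user with a CPU-time cap. It assumes a Unix system and a parent process running as root; the uid/gid (65534, i.e. "nobody" on many systems), the 60-second limit, and the program path are placeholder values.

```python
# Minimal sketch: run a third-party program as a low-privileged user with
# a CPU-time limit. Unix only; the parent must run as root.
import os
import resource
import subprocess

def restrict():
    resource.setrlimit(resource.RLIMIT_CPU, (60, 60))  # cap CPU seconds
    os.setgid(65534)  # drop the group first...
    os.setuid(65534)  # ...then the user, so the drop can't be undone

subprocess.run(["./thirdparty_app"], preexec_fn=restrict)
```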

Resources