I’m trying to debug a Bonjour network routine, and every time I run it, the Mac’s firewall asks “Do you want the application ProjectName to accept incoming network connections?”
I click “allow,” give it the administrator name and password, and the app is duly added to the firewall’s list of allowed incoming-connections apps…until the next run.
Debugging this sync routine is cumbersome as it is, and having to type the admin name and password on every run is a real nuisance. Of course I could get around this by running the Mac as an admin user, but I’d rather not compromise security that way.
Does Xcode have some project setting that will calm the firewall?
You should code sign your app. The firewall is much more lenient toward apps that are signed.
To do that, go into your Project Settings and, in the Code Signing section, set one of your provisioning profiles as the Code Signing Identity.
There's a pretty good description of the process here.
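If you prefer the command line, a rough equivalent (the identity name and app path below are placeholders; use whichever signing identity is already in your keychain):

# sign the built app with an identity from the keychain (identity name and path are placeholders)
codesign --force --deep --sign "Mac Developer: Your Name" /path/to/ProjectName.app
# confirm the signature took
codesign --verify --verbose /path/to/ProjectName.app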
Microsoft SmartScreen, well-known for its message:
Windows Defender SmartScreen prevented an unrecognized app from starting
is useful for protecting end users from malware, but it can also hurt indie developers when they distribute binaries: end users see frightening messages, and that is a problem for the developer's reputation (see one developer's comment: "My customers often think that I am purveying a virus, malware or something illegitimate and they tell their friends and I lose sales"):
Smart-Screen filter still complains, despite I signed the executable, why?
Even with a paid certificate, if software-release1.0.1.exe finally gets whitelisted, the messages will come back as soon as you release the software-release1.0.2.exe update:
Transferring Microsoft SmartScreen reputation to renewed certificate
The only solution seems to be Extended Validation ("EV") code signing, which can cost $300-500 per year (this fixed fee weighs proportionally more on small indie developers).
Question: is there a way to get a .exe whitelisted immediately (or within a few days) for all users - and not only on my own computer - by submitting it to Microsoft for analysis?
I have seen this link: https://www.microsoft.com/en-us/wdsi/filesubmission. Has anyone been able to use it successfully to avoid further SmartScreen alerts? (It seems not.)
Are there other methods? Such as deploying 100 VMs via an automated script and letting each VM download and install the .exe automatically? But those downloads would probably all come from the same IP, so Microsoft would likely bump the reputation counter by +1 instead of +100?
As you said in your question, the first solution for having trusted software is code signing with an EV certificate. But another, trickier option is to increase your software's reputation. As Microsoft says here:
Reputation-based URL and app protection
If a URL, a file, an app, or a certificate has an established reputation, users won't see any warnings. If, however, there's no reputation, the item is marked as a higher risk and presents a warning to the user.
In the last paragraph of your question, you mentioned creating many Docker containers or virtual machines to build up trust and reputation. I will complete that idea with a solution to the problem of every VM or container sharing the same IP address.
The solution is to use Tor as a proxy in all of your VMs or containers.
Using Tor you can create a proxy that routes traffic through the Tor network and hides the real IP address of your virtual machines or containers. Tor is free to use, you can connect as many nodes to its network as you want, and you can change your IP address frequently. It is also better to run different versions of Windows in some of your VMs. Before all of this, remember that you must first submit your software for malware analysis.
I have a Go API I am trying to test and I've installed reflex. It gets up and running just fine, but every time I save my project it creates a new instance of the application. It will prompt my system to ask for permissions:
Do you want the application “go-api” to accept incoming network connections?
Clicking Deny may limit the application’s behavior.
This setting can be changed in the Firewall pane of Security & Privacy preferences.
Would really appreciate any help or even some guidance on how to troubleshoot as I haven't seen anything about this bug yet.
When I check my Mac's privacy and security settings, I can see that the firewall allows an instance of go-api along with many other instances of go-api.
When I reveal these applications in my finder I can see that Go is instantiating separate build files for each instance of my program and creating a Unix executable file to serve as the application.
On my coworkers' machines, with Go and reflex installed for the same API, this behavior is not present. I don't think it is related to the reflex config or the API, because theirs are exactly the same as mine, yet they don't see the same behavior.
I am thinking that this may be related to my .bash_profile or something:
# Setting PATH for Go
export GOPATH="$HOME/go"
PATH="$GOPATH/bin:$PATH"
export GOOGLE_APPLICATION_CREDENTIALS=/Users/me/Documents/path/to/go-api
export GO_ENV=dev
UPDATE: I found a solution, albeit an unsatisfying one. According to Apple:
If you run an unsigned app that is not listed in the firewall list, a dialog appears with options to Allow or Deny connections for the app. If you choose Allow, OS X signs the application and automatically adds it to the firewall list. If you choose Deny, OS X adds it to the list but denies incoming connections intended for this app.
-- https://support.apple.com/en-us/HT201642
So I turned off the firewall, because I haven't found a way to single out my application and allow it while keeping the firewall up.
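For reference, one workaround I've seen suggested (sketch only, not something I've verified): sign the freshly built binary with a persistent identity after each rebuild, so the firewall's decision sticks. Assuming a self-signed code-signing certificate named "gosign" exists in the login keychain, the reflex invocation might look like:

# rebuild, re-sign with the self-signed identity, then run (the output path and identity name are placeholders)
reflex -r '\.go$' -s -- sh -c 'go build -o ./tmp/go-api . && codesign -f -s "gosign" ./tmp/go-api && ./tmp/go-api'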
I’m going to try to be as thorough as I can, but if you have questions or would like additional tests, I will provide more detail as I can. I have a small number of computers exhibiting intermittent issues when waking from sleep.
Some details:
Bound to Active Directory (although the bind is likely broken when the issue occurs)
OSX - 10.12.3
Machine is Encrypted
Symptoms:
When a user sleeps their machine which enables a locked screen saver, and then attempts to wake the machine, they are unable to log in using their credentials.
If they click "Switch User" they are then able to log into their account; however, they are not recognized as an admin and cannot run sudo commands or unlock System Preferences.
It seems, at least on the computer I was able to get hands-on with, that they cannot authenticate in Terminal or System Preferences UNLESS their network connection matches the one that allowed them to log in. So if they switch user and then connect to Wi-Fi, they cannot authenticate in System Preferences; but if they turn off Wi-Fi, they are able to authenticate.
When clicking "Switch User", the Wi-Fi appears to drop, which is what lets them log in.
Restarting resolves the issue for some users but not others (unverified, going off user reports; the machine I restarted did resolve the issue, at least temporarily).
Generally when I see this issue, the computer seems to have become unbound from Active Directory. Re-binding it appears to resolve the issue temporarily (until AD drops the keychain item again).
The issue was present prior to upgrading to OSX 10.12.
It seems to me like the computer knows to check with AD if the internet is available, but if AD is unreachable or the credentials are not accepted, then it does not know to default to the local cache, unless the internet is turned off completely. I'm not sure what file or files may be involved in that, but I would like to change that file to default to the local cache when internet is connected but AD is unreachable as well as when no internet is present.
This is an issue with the opendirectoryd daemon, which misbehaves when trying to bind with AD.
The crude solution is basically to kill the daemon, which will then restart and rebind on its own.
There are many ways to automate the kill; a cron job would work, but it would have to run the killall command every minute, which is very dirty.
I am using sleepwatcher (available via Homebrew) and set it to run the kill command every time the laptop wakes from sleep, which works like a charm.
It's a workaround, but it seems Apple isn't really working on a fix for this issue, which has been going on for years.
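For reference, a minimal sketch of that setup (killall target as described above; this assumes Homebrew's default sleepwatcher configuration, which runs ~/.sleep and ~/.wakeup):

# install sleepwatcher and run it as a background service
brew install sleepwatcher
brew services start sleepwatcher
# ~/.wakeup runs on every wake: restart opendirectoryd (it respawns automatically)
echo 'sudo /usr/bin/killall opendirectoryd' > ~/.wakeup
chmod +x ~/.wakeup
# note: killing a system daemon needs root, so this assumes a sudoers rule allowing that killall without a password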
There are many sites that explain how to run signtool.exe on a .pfx certificate file, which boil down to:
signtool.exe sign /f mycert.pfx /p mypassword /t http://timestamp.server.com \
/d "My description" file1.exe file2.exe
I have a continuous integration (CI) process set up (using TeamCity) which, like most CI processes, does everything: checks out source, compiles, signs all .exes, packages them into an installer, and signs the installer .exe. There are currently 3 build agents running identical VMs, and any of them can run this process.
Insecure implementation
To accomplish this today, I do a couple of Bad Things(TM) as far as security is concerned: the .pfx file is in source control, and the password for it is in the build script (also in source control). This means that any developer with access to the source code repository can take the pfx file and do whatever nefarious things they'd like with it. (We're a relatively small dev shop and trust everyone with access, but clearly this still isn't good.)
The ultimate secure implementation
All I can find about doing this "correctly", is that you:
Keep the pfx and password on some secure medium (like an encrypted USB drive with finger-based unlock), and probably not together
Designate only a couple of people to have access to sign files
Only sign final builds on a non-connected, dedicated machine that's kept in a locked-up vault until you need to bring it out for this code-signing ceremony.
While I can see the security merit in this, it is a very heavy process and expensive in terms of time (running through the process, securely keeping backups of certificates, ensuring the code-signing machine is in a working state, etc.).
I'm sure some people skip steps and just manually sign files with the certificate stored on their personal system, but that's still not great.
It also isn't compatible with signing files that are then used within the installer (which is also built by the build server) -- and this is important when you have an installed .exe that has a UAC prompt to get admin access.
Middle ground?
I am far more concerned with not presenting a scary "untrusted application" UAC prompt to users than proving it is my company. At the same time, storing the private key AND password in the source code repository that every developer (plus QA and high-tier tech support) have access to is clearly not a good security practice.
What I'd like is for the CI server to still sign during the build process like it does today, but without the password (or the private key portion of the certificate) being accessible to everyone with access to the source code repository.
Is there a way to keep the password out of the build or secure somehow? Should I be telling signtool to use a certificate store (and how do I do that, with 3 build agents and the build running as a non-interactive user account)? Something else?
I ended up taking an approach very similar to what @GiulioVlan suggested, but with a few changes.
MSBuild Task
I created a new MSBuild task that executes signtool.exe. This task serves a couple main purposes:
It hides the password from ever being displayed
It can retry against the timestamp server(s) upon failures
It makes it easy to call
Source: https://gist.github.com/gregmac/4cfacea5aaf702365724
This specifically takes all output and runs it through a sanitizer function, replacing the password with all *'s.
I'm not aware of a way to censor regular MSBuild commands, so if you pass the password on the command line directly to signtool.exe using <Exec>, the password will be displayed -- hence the need for this task (aside from its other benefits).
Password in registry
I debated a few ways to store the password "out-of-band", and ended up settling on the registry. It's easy to access from MSBuild, it's fairly easy to manage manually, and if users don't have RDP or remote-registry access to the machine, it's actually reasonably secure (can anyone say otherwise?). Presumably there are ways to lock it down further with fancy GPO stuff as well, but that's beyond the lengths I care to go to.
This can be easily read by msbuild:
$(Registry:HKEY_LOCAL_MACHINE\SOFTWARE\1 Company Dev#CodeSigningCertPassword)
And it is easy to manage manually via regedit.
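For example, the value can be created from an elevated command prompt (key and value names match the MSBuild reference above; the password itself is a placeholder):

reg add "HKLM\SOFTWARE\1 Company Dev" /v CodeSigningCertPassword /t REG_SZ /d "your-pfx-password"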
Why not elsewhere?
In the build script: it's visible by anyone with source code
Encrypted/obfuscated/hidden in source control: if someone gets a copy of the source, they can still figure this out
Environment variables: In the Teamcity web UI, there is a detail page for each build agent that actually displays all environment variables and their values. Access to this page can be restricted but it means some other functionality is also restricted
A file on the build server: possible, but it seems a bit more likely that it would be inadvertently made accessible via file sharing or something
Calling From MSBuild
In the <Project> tag:
<Import Project="signtool.msbuild.tasks"/>
(You could also put this in a common file with other tasks, or even embed directly)
Then, in whichever target you want to use for signing:
<SignTool SignFiles="file1.exe;file2.exe"
PfxFile="cert.pfx"
PfxPassword="$(Registry:HKEY_LOCAL_MACHINE\SOFTWARE\1 Company Dev#CodeSigningCertPassword)"
TimestampServer="http://timestamp.comodoca.com/authenticode;http://timestamp.verisign.com/scripts/timstamp.dll" />
So far this works well.
One common technique is to leave keys and certificates in version control but protect them with a password or passphrase. The password is saved in an environment variable local to the machine, which can easily be accessed from scripts (e.g. %PASSWORD_FOR_CERTIFICATES%).
One must be careful not to log these values in plain text.
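For example, reusing the signtool command from the question with the password pulled from the environment variable:

signtool.exe sign /f mycert.pfx /p %PASSWORD_FOR_CERTIFICATES% /t http://timestamp.server.com /d "My description" file1.exe file2.exe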
In my company, we have a stupid firewall. It blocks all itunes.apple.com sites, so to publish an app we have to use our own internet connection. To publish an app, we have to go to this screen in Organizer and press "Share".
I was asked: what do we have to unblock to be able to publish an app? With my limited knowledge, I used Wireshark and discovered that the program first accesses this site:
http://contentdelivery.itunes.apple.com:443
But after that, is there a special port, or something else that the firewall could be blocking? Or do we only have to unblock itunes.apple.com sites for HTTP and HTTPS? I really can't figure it out :(
I'd try something like Little Snitch to find out; it will pop up a box every time a program tries to access something on the internet, giving you the address and port number.
I find it pretty useful for this, and for testing connectivity issues while debugging.
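If you can't install Little Snitch, a rough alternative using built-in tools (the host and port below are just the ones from your Wireshark capture):

# check whether the firewall lets an outbound connection through to that host/port
nc -vz contentdelivery.itunes.apple.com 443
# watch what Xcode/Organizer actually connects to while uploading
sudo lsof -i -nP | grep -i xcode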