Why bother validating a downloaded file with an OpenPGP key from the same source? - gnupg

While installing some dependencies I came across this.
What security is gained by validating a download with an OpenPGP key downloaded from the same source, beyond detecting corruption of the file?

There are several reasons for providing the key at the same source.
Trust on First Use
This concept assumes that the download source is not compromised when you access it for the first time -- for example, during the development phase, from some development client. Explicitly pinning the key to the one downloaded then protects you from attacks on the download source during later phases, for example for unattended builds on some build server.
Fetching an Updated Copy of the Key
A key location next to the source code is very handy if you already know the fingerprint of the primary key (i.e., you pinned the key as discussed above) but want to update the key with respect to subkeys, certifications, and user IDs without accessing key servers. Many enterprise build servers have very strict firewall rules, so accessing key servers might not be possible (while you obviously have access to the source code in order to build it). Also, you often want to remove dependencies on third-party resources, as they imply another source of issues such as availability.
Updating subkeys is especially important: good OpenPGP practice is to have a long-lasting primary key and to exchange subkeys at regular intervals, or to use different subkeys for different build servers. By pinning the primary key's fingerprint and importing a fresh copy of the key each time you validate a third-party resource, you can be sure of having an up-to-date copy of the key, including subkeys.
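The pinning workflow described above can be sketched with plain gpg commands. This is a hedged, self-contained demo: the key, file names, and fingerprint handling are made up for illustration, and an isolated GNUPGHOME plus a throwaway key stand in for the project key you would pin on first use.

```shell
set -e
# Isolated keyring so the demo does not touch your real one.
export GNUPGHOME="$(mktemp -d)"

# Stand-in for "first use": create a throwaway signing key and record its
# primary fingerprint. In reality you record the project key's fingerprint
# once and keep it in your build configuration.
gpg --batch --pinentry-mode loopback --passphrase '' \
    --quick-generate-key "Demo Project Signing Key" default default never
PINNED_FPR="$(gpg --with-colons --list-keys | awk -F: '/^fpr:/ {print $10; exit}')"

# Stand-in for the key file shipped next to the source code.
gpg --export --armor "$PINNED_FPR" > KEYS.asc

# On every build: import the (possibly updated) key, then refuse to
# continue unless the pinned primary fingerprint is present.
gpg --import KEYS.asc
gpg --with-colons --list-keys "$PINNED_FPR" > /dev/null

# Verify a signed artifact; any valid signing subkey of the pinned
# primary key is accepted by gpg --verify.
echo "release contents" > release.tar.gz
gpg --batch --pinentry-mode loopback --passphrase '' \
    --detach-sign --armor release.tar.gz
gpg --verify release.tar.gz.asc release.tar.gz
```

Because the check is against the pinned primary fingerprint, re-importing the key file on every build picks up new subkeys and certifications without widening what you trust.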
A Starting Point to Validate the Key
OpenPGP keys can carry certifications made by other keys. After importing, there is probably no trust path yet from other validated and trusted OpenPGP keys, but having the key and its certifications imported is a starting point for finding such a path. For example, developers of an open source project might certify the project key.
Fetching the unvalidated key from the source repository is a starting point for performing such a validation.

Related

Using Yubikey to sign

I am trying to use my Yubikey to sign git commits.
I was able to create a master key with three subkeys (authentication, signing and encryption).
After I created the subkeys I moved them to my new Yubikey. So on the computer where I created the keys everything works fine, meaning that I am only able to sign commits (and files) if I insert my Yubikey.
However, I exported the public key of my master key and imported it on another machine. Now this other machine shows that the subkeys are stored on the computer instead of on my Yubikey. Furthermore, I am able to sign commits and files without having to insert my Yubikey.
Does anyone of you have an idea what I did wrong?

Is there a method preventing dynamic ec2 key pairs from being written to tfstate file?

We are starting to use Terraform to build our AWS EC2 infrastructure but would like to do this as securely as possible.
Ideally we would like to create a key pair for each Windows EC2 instance dynamically and store the private key in Vault. This is possible, but I cannot think of a way of implementing it without having the private key written to the tfstate file. Yes, I know I can store the tfstate file in an encrypted S3 bucket, but this does not seem like an optimal security solution.
I am happy to write custom code if need be to have the key pair generated via another mechanism and the name passed as a variable to Terraform, but I don't want to if there are other more robust and tested methods out there. I was hoping we could use Vault for this, but from my research it does not look possible.
Has anyone got any experience of doing something similar? Failing that, any suggestions?
The most secure option is to have an arbitrary keypair whose private key you destroy, and user_data that joins the instances to an AWS Managed Microsoft AD domain controller. After that you can use conventional AD users and groups to control access to the instances (but not group policy in any depth, regrettably). You'll need a domain member server to administer AD at that level of detail.
If you really need to be able to use local admin on these Windows EC2 instances, then you'll need to create the keypair for decrypting the password once manually and then share it securely through a secret or password manager with other admins using something like Vault or 1Password.
I don't see any security advantage to creating a keypair per instance, just considerable added complexity. If you're concerned about exposure, change the administrator passwords after obtaining them and store those in your secret or password manager.
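If you do end up managing a launch keypair yourself, one way to keep the private key out of the tfstate file is to generate the key out of band and hand only the public half to Terraform. A sketch with hypothetical names:

```hcl
# Sketch (hedged): the keypair is generated outside Terraform, e.g.
#   ssh-keygen -t rsa -b 4096 -f launch-key -N ''
# and only the public key is passed in, so the private key never
# appears in the state file.
resource "aws_key_pair" "windows_launch" {
  key_name   = "windows-launch-key"                  # hypothetical name
  public_key = file("${path.module}/launch-key.pub")
}
```

By contrast, anything generated inside Terraform (e.g. a `tls_private_key` resource) is recorded in state, which is exactly the exposure the question is about.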
I still advise going with AD if you are using Windows. Windows with AD enables world-leading unified endpoint management and Microsoft has held the lead on that for a very long time.

Fastlane match with multiple apps

I have a developer account with multiple apps. I am using fastlane match to generate certs and profiles. Using match creates new certs; see the code below for how I generate them.
lane :GenerateCerts do
match(app_identifier: "dev", type: "development")
match(app_identifier: "stage", type: "development")
match(app_identifier: "stage", type: "appstore")
end
I have already hit the limit on the developer account for generating new iOS Distribution certs, so I am not able to generate a new one. But I guess the certificate already on the dev portal can be used for generating profiles.
How can I use the certificate already in the portal to generate profiles?
Also, I need to manually set the profiles in Xcode for different configurations. Which command could be helpful to configure certificates in Xcode generated by match, cert, sigh?
What is the best practice for the following case, when I have a single developer account for multiple apps?
Creating a different git repo for each app for fastlane match
A single repo for all apps.
For now I am using the first one. If you have any better suggestions please help.
How can I use the certificate already in the portal to generate profiles?
This use case is not supported by match; match only supports syncing profiles it created. If you want to work around this, you can manually create an identical, encrypted git repo and it will work from there. There are instructions for modifying one on the advanced documentation page.
Instead, you could review the source code for match, which uses cert and sigh under the hood, and create a custom action for your specific use case.
But honestly it's easier to just destroy the existing certs and make new ones with match.
Also, I need to manually set the profiles in Xcode for different configurations. Which command could be helpful to configure certificates in Xcode generated by match, cert, sigh?
To clarify:
cert will get (or create, if necessary) a code signing certificate
sigh will get (or create, if necessary) a provisioning profile signed with a code signing certificate
match calls the above commands and syncs their outputs via an encrypted git repo
So if you want to configure certificates, use cert.
What is the best practice for the following case, when I have a single developer account for multiple apps?
There's not really a best practice here that I know of. You have a few options, each with their own tradeoffs:
Use one repo per app. This benefits from complete isolation by project which can be helpful for security purposes but you'll need to sync the distribution profiles by hand (using the advanced technique I linked above)
Use one repo, with one branch per app. This lets you sync the same certificates around for several apps, but has a security risk because anyone with access to this repo has more privileges than they need (unless everyone works on everything)
Use one repo for distribution credentials, with an additional per-app repo for development credentials.
The second option requires the match_branch option, which can be passed in your Fastfile or (my preference) specified in your Matchfile to keep your Fastfile cleaner. For the final option, you can use the for_lane command to override an option when called from a particular lane. For example, your Matchfile might look like:
git_url "git@github.com:my_org/my_repo_name.git"
type 'development'
readonly true
for_lane :deploy_to_app_store do
type 'appstore'
git_url "git@github.com:my_org/my_distribution_cert_repo.git"
end

Best practice for distributing chef validation key

I am using Enterprise Chef. There is just one validation key per organisation. That means that once I download it for my workstation, other DevOps people on the team can't have it, so they can't bootstrap nodes from their workstations. If I automate the process of bootstrapping, for instance by putting the key on S3, then I will also have to think about keeping my workstation's validation key and the S3 copy in sync (as will all the other DevOps people on the team).
So question is:
What are the best practices for distributing this single validation key across nodes and workstations?
My best bet on this:
Use chef on your workstations to manage the distribution of the validation.pem file.
Another way is to put it in a shared place (a CIFS or NFS share) for your whole team.
According to this blog post this will become unneeded with Chef 12.2.
For the record, the validation key is only necessary for the node to register itself and create its own client.pem file at first run; it should (must, if you're security aware) be deleted from the node after this first run.
The chef_client cookbook can take care of cleaning the validator key and will help manage nodes configuration.
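The cleanup step is small enough to sketch as a shell function. This is a hedged sketch of what the chef_client cookbook does for you; the directory is parameterized for testing, and in production it would be /etc/chef:

```shell
# Remove the org-wide validation key once the node has registered and
# holds its own client.pem. Sketch only; run after the first
# chef-client converge.
clean_validator() {
    chef_dir="${1:-/etc/chef}"
    if [ -f "$chef_dir/client.pem" ] && [ -f "$chef_dir/validation.pem" ]; then
        rm -f "$chef_dir/validation.pem"
    fi
}
```

Guarding on the presence of client.pem matters: if the first run failed before registration, the node still needs the validation key for its next attempt.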

How paranoid should I be about my Azure application binary files being stolen?

I need to migrate a huge application to Windows Azure. The application depends on a third-party library that requires an activation key stored in a special binary file on the role instance filesystem.
Obviously that key has to be either included in the role package or stored somewhere the role can fetch it. The activation key will not be bound to the machine (since I have no idea where exactly a role will run in the cloud), so anyone who obtains it can use it to activate their own copy of that library.
Since Azure roles run somewhere not under our control I feel somewhat paranoid about the key being stolen and made widely available.
How do I evaluate how likely it is that a binary file included in an Azure role gets stolen? How do I mitigate such risks?
When asking questions like this you need to differentiate your attackers:
A2) Rogue internet hacker
A3) A developer in your organization
A4) A person in your organization who does deployments
Your role binaries are not accessible to A2; however, they are very accessible to A3 and A4.
As mentioned in another answer, you can store your secrets in the Role Configuration File.
However, these secrets are still accessible to A4 (anyone with access to the portal).
To defend against A4 you want to encrypt the secrets in the role configuration file with a key that isn't present in the role binaries or the role configuration. Then in your role binaries you decrypt the encrypted role setting, with the decryption key.
The key to use for encrypting the secrets is a certificate, which the Azure SDK will install for you. You can find a post about setting up certificates for Azure here.
In summary, for secrets where you need to defend against people in your organization who do deployments/have access to your configuration files you do the following:
Have a trusted party do the following:
Generate a Certificate/Private Key
Encrypt secrets with the Certificate, and store the encrypted settings in your configuration files
Upload the Certificate/Private Key to your service.
Then modify your service to:
Have the service model install the Certificate/PrivateKey
Have your application load the private key for decrypting secrets
Have your application load the encrypted settings from role configuration, and decrypt them with the private key.
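The encrypt/decrypt round trip can be sketched with openssl's smime subcommand. This is a hedged demo: the file names and the secret are made up, and a self-signed certificate stands in for the one the trusted party would generate and the Azure SDK would install for the role.

```shell
set -e
# Trusted party: generate a certificate/private key pair (self-signed
# here for the demo).
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -subj "/CN=config-secrets" -keyout role-key.pem -out role-cert.pem

# Encrypt a secret against the certificate (public key only).
printf 'my-activation-key' > secret.txt
openssl smime -encrypt -aes256 -in secret.txt \
    -outform PEM -out secret.enc role-cert.pem

# The encrypted blob goes into the role configuration file; only the
# running service, which holds role-key.pem, can recover the secret.
openssl smime -decrypt -in secret.enc -inform PEM \
    -inkey role-key.pem -out decrypted.txt
```

Note the asymmetry that makes this defend against A4: whoever edits configuration files only ever needs role-cert.pem, while role-key.pem stays with the trusted party and the deployed service.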
As far as security is concerned, unless your team is extremely capable in this area, the default, always-upgraded Azure OS is probably much more secure than any self-configured host. At least, this is how we (aka my company, Lokad) assess the situation two years after migrating fully to Windows Azure, compared to our previous self-hosted situation.
Then, if you have credentials, such as licence keys, then the most logical place to put them is the Role configuration file - not the Role binaries. You can naturally regenerate the special file within the VM as needed, but this way, you don't internally spread your credentials all over the place as you archive your binary versions.