How to link my Google Domain with my Google Cloud DNS?

I'm trying to use Google Cloud to generate an HTTPS certificate, following this guide: https://certbot-dns-google.readthedocs.io/en/stable/
I have my domain via Google Domains, and I also use GSuite. I made a Google Cloud account (free), and I'm not sure how to tell it about my domain.
What's the best way forward, so I can get the cert for my domain? Thanks!

You don't necessarily need to use Google Cloud in order to get a certificate from Let's Encrypt, but here's how I would do it.
Assuming the Google Domains domain name is certme.com, I would create a GCP Cloud DNS public zone for certme.com and update the name servers at Google Domains to the ones listed in that zone, so Cloud DNS becomes authoritative for the domain. Then create a GCP service account (this will be used by certbot later); remember to grant it the permissions described in your guide and download its credentials to the machine running certbot.
After that you should be able to run the following (described in your guide):
certbot certonly \
--dns-google \
--dns-google-credentials /path/to/credentials.json \
--dns-google-propagation-seconds 120 \
-d certme.com
Remember that under the hood certbot uses the "dns-01" challenge: for this to work, a DNS TXT record must be added to the zone (certbot does this automatically via the Cloud DNS API) and then validated by Let's Encrypt.
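The zone and service account setup can also be scripted with the gcloud CLI. This is a sketch; the zone name certme-zone, the account name certbot-dns, and MY_PROJECT are placeholders you would substitute, and your guide may call for a narrower role than roles/dns.admin:

```shell
# Create the public managed zone for the domain
gcloud dns managed-zones create certme-zone \
  --dns-name="certme.com." \
  --description="Zone for certme.com"

# Create a service account for certbot and grant it DNS admin rights
gcloud iam service-accounts create certbot-dns \
  --display-name="certbot dns-01 solver"
gcloud projects add-iam-policy-binding MY_PROJECT \
  --member="serviceAccount:certbot-dns@MY_PROJECT.iam.gserviceaccount.com" \
  --role="roles/dns.admin"

# Download the JSON key that certbot will use
gcloud iam service-accounts keys create /path/to/credentials.json \
  --iam-account="certbot-dns@MY_PROJECT.iam.gserviceaccount.com"

# Show the zone's name servers, to be set at Google Domains
gcloud dns managed-zones describe certme-zone --format="value(nameServers)"
```

The last command prints the NS records you need to enter in the Google Domains console so the zone actually serves your domain.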

Related

Why can't we export a *public* certificate from AWS Certificate Manager?

The docs for AWS Certificate Manager (ACM) are very clear that we cannot export a public cert -- especially its private key.
Is there a security reason for that? What's so bad in doing that?
Because SSL certificates aren't cheap, and AWS supplies certificates for free only for use with other AWS services. If AWS allowed export, you could use the certificate anywhere, and then what would be the point of enabling clients to create free certificates? I can agree with you on one point: maybe AWS could allow exporting certificates and charge the client as if the client had bought the certificate. Other than that, I think its disallowance is mostly business related.

error setting up LDAPS with AWS Managed AD - unable to download

I am trying to set up LDAPS with AWS Managed AD but am receiving an "unable to download" error when opening PKIVIEW. See screenshots below.
I granted public access to the bucket and folders, but the URL would take me to the S3 bucket's Properties tab if I was logged in; otherwise it would take me to an AWS login prompt.
I have reached step number 10 under "Step 4b: Configure Enterprise Subordinate CA" on the document listed on the AWS site in trying to setup LDAPS using AWS Managed AD. See link below.
https://aws.amazon.com/blogs/security/how-to-enable-ldaps-for-your-aws-microsoft-ad-directory/
This is the last action before Step 5.
For the record, I have set up exactly per instructions in this document. Both the RootCA and SubordinateCA have joined the domain and are in the same security group and subnet.
Any help would be greatly appreciated.
Thanks.
PS. I have also posted this question on the AWS forum
I managed to resolve this issue with a combination of two things:
1. Removed/reinstalled the certificate services (so I started again from step 3 in the doc), and this time did not join the RootCA to the domain (I misread this the first time around).
2. Changed the S3 URL paths to align with how they are written in the doc (because there are a couple of different ways of forming an S3 URL). I then tested that I could browse and download each of the files using the S3 URL without logging into AWS, and this worked.
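That download test can be done from a terminal instead of a browser. This is a sketch with placeholder bucket and object names; substitute the exact paths from the AWS blog post. An HTTP 200 without any AWS login confirms the CA files are publicly reachable, which is what PKIVIEW needs:

```shell
# Check each CDP/AIA file is downloadable anonymously (names are placeholders)
for url in \
  "https://s3.amazonaws.com/my-pki-bucket/RootCA.crt" \
  "https://s3.amazonaws.com/my-pki-bucket/RootCA.crl"
do
  # Print only the HTTP status code followed by the URL tested
  curl -s -o /dev/null -w "%{http_code}  $url\n" "$url"
done
```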

How to secure composer-rest-server after generating REST API?

I have configured composer-rest-server. I also provided the Fabric username/password while configuring composer-rest-server (WebAppAdmin or admin). Now I am able to access the REST API without providing any credentials (through Postman or LoopBack).
I would like to understand how to secure composer-rest-server. I have understood that we can add a participant and issue an identity, but I am not able to connect the dots in terms of how everything works together.
How do I secure composer-rest-server while accessing the REST API?
When and how do we use the "username/secret" registered against a participant?
When do we authenticate against the composer-rest-server API, and when do we use a participant identity to access the business network?
Please see the documentation on this subject:
https://hyperledger.github.io/composer/integrating/enabling-rest-authentication.html
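In short, the documentation describes configuring a Passport authentication strategy through the COMPOSER_PROVIDERS environment variable and starting the server with authentication enabled. A rough sketch, assuming the GitHub strategy (the clientID/clientSecret values and the admin@my-network card name are placeholders):

```shell
# Configure a Passport OAuth strategy for the REST server (values are placeholders)
export COMPOSER_PROVIDERS='{
  "github": {
    "provider": "github",
    "module": "passport-github",
    "clientID": "YOUR_CLIENT_ID",
    "clientSecret": "YOUR_CLIENT_SECRET",
    "authPath": "/auth/github",
    "callbackURL": "/auth/github/callback",
    "successRedirect": "/",
    "failureRedirect": "/"
  }
}'

# -a true enables REST authentication; -m true enables multi-user mode, so each
# authenticated user supplies their own business network card (participant identity)
composer-rest-server -c admin@my-network -a true -m true
```

That split answers the last question: the OAuth provider authenticates the caller to the REST server, while the business network card (participant identity with its username/secret) determines what that caller can do on the business network.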

Custom domain which Heroku forwards to is not secure (node.js)

What steps do I need to take to move my normal node.js application into a state where it is secure on my custom domain? When I visit my heroku application example.herokuapp.com, the connection is secure across https://.
When I forward that Heroku domain to my own site, www.example.com, however, it shows a warning that the connection is not secure.
Are there any articles online that have answered this question? I cannot seem to find any information on what steps to take. Thanks all
The steps for setting up custom domain SSL with your Heroku app are as follows:
1- Add your SSL add-on:
$ heroku addons:add ssl
2- Add the certificate to your app
Using the certificate you generated in the previous step, upload it to Heroku:
$ heroku certs:add server.crt server.key
3- Configure DNS
Add a CNAME record in the DNS configuration that points from the domain name that will host secure traffic e.g. www.yourdomain.com to the SSL endpoint hostname, e.g. example.herokussl.com. Consult your DNS provider for instructions on how to do this. The target should be the fully qualified domain name for the SSL endpoint associated with the domain.
You will find further information in Heroku Dev Center:
https://devcenter.heroku.com/articles/ssl-endpoint
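Once the CNAME is in place, you can sanity-check the setup from a terminal. The hostnames below are the examples from the steps above, not real endpoints:

```shell
# Confirm the CNAME resolves to the SSL endpoint hostname
dig +short CNAME www.yourdomain.com

# Once DNS has propagated, inspect the certificate actually being served
curl -svI https://www.yourdomain.com 2>&1 | grep -i "subject\|issuer"
```

If dig still returns the old target, wait for the DNS TTL to expire before retesting.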
Assuming you are on a Hobby or Professional plan, run the following command to enable Automated Certificate Management (ACM):
heroku certs:auto:enable -a <app name>
https://devcenter.heroku.com/articles/automated-certificate-management
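After enabling ACM, you can check whether the certificate was issued and which DNS targets your records should point at (the app name is a placeholder):

```shell
# Show ACM certificate status for each custom domain
heroku certs:auto -a <app name>

# List the app's domains together with their DNS targets
heroku domains -a <app name>
```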
Use the Expedited CDN add-on and you can force HTTPS for free.
First you need to be on at least the Hobby plan.
You need to add Automated Certificate Management (ACM) and your custom domain(s).
You can add Expedited CDN from the Resources tab of your project, and it's free.
Then visit Expedited CDN and configure DNS as described there; it's easy and hassle-free, just follow the steps, trust me it will work.
It has a lot of additional features you might be looking for.

How to create new client certificates / tokens for programmatic access to the Kubernetes API hosted on GKE?

I am running a Kubernetes cluster hosted on GKE and would like to write an application (written in Go) that speaks to the Kubernetes API. My understanding is that I can either provide a client certificate, bearer token, or HTTP Basic Authentication in order to authenticate with the apiserver. I have already found the right spot to inject any of these into the Golang client library.
Unfortunately, the examples I ran across tend to reference existing credentials stored in my personal kubeconfig file. This seems inadvisable from a security perspective and makes me believe that I should create a new client certificate / token / username-password pair in order to support easy revocation/removal of compromised accounts. However, I could not find a spot in the documentation actually describing how to go about this when running managed Kubernetes on GKE. (There's this guide on creating new certificates, which explains that the apiserver eventually needs to be restarted with updated parameters, something that to my understanding cannot be done on GKE.)
Are my security concerns for reusing my personal Kubernetes credentials in one (or potentially multiple) applications unjustified? If not, what's the right approach to generate a new set of credentials?
Thanks.
If your application is running inside the cluster, you can use Kubernetes Service Accounts to authenticate to the API server.
If this is outside of the cluster, things aren't as easy, and I suppose your concerns are justified. Right now, GKE does not allow additional custom identities beyond the one generated for your personal kubeconfig file.
Instead of using your credentials, you could grab a service account's token (inside a pod, read from /var/run/secrets/kubernetes.io/serviceaccount/token), and use that instead. It's a gross hack, and not a great general solution, but it might be slightly preferable to using your own personal credentials.
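For the in-cluster case, the mounted service account credentials can be used directly against the API server. A minimal sketch, meant to run inside a pod; the pod's service account needs RBAC permission to list pods for the final call to succeed:

```shell
# Standard mount point for the pod's service account credentials
SA_DIR=/var/run/secrets/kubernetes.io/serviceaccount
TOKEN=$(cat "$SA_DIR/token")
NAMESPACE=$(cat "$SA_DIR/namespace")

# Call the API server via its in-cluster DNS name, verifying its certificate
# with the mounted CA bundle and authenticating with the bearer token
curl --cacert "$SA_DIR/ca.crt" \
  -H "Authorization: Bearer $TOKEN" \
  "https://kubernetes.default.svc/api/v1/namespaces/$NAMESPACE/pods"
```

A Go client in the pod would do the equivalent through in-cluster configuration rather than raw curl, but the credential flow is the same.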