Can I create Kubernetes networks in a host project, then share them to a service project using Deployment Manager? - google-deployment-manager

I have a host project and a service project. My networks are created in the host project, and the compute/Kubernetes resources in the service project are created on these shared networks.
When doing this manually, I would create the subnets in the host project, associate the network with the service project, share the subnets with the three service accounts in the service project, and assign the Network User role to them. How much of this can I do in Deployment Manager?
Using Deployment Manager I can create the subnets, but I can't get the subnets to be shared. Here is my code:
resources:
- name: mytest-kube
  type: subnetwork.py
  properties:
    network: "/projects/xxxxxx/global/networks/xxxxx-vpc1"
    region: us-east1
    ipCidrRange: x.x.x.0/24
    privateIpGoogleAccess: true
    enableFlowLogs: true
    secondaryIpRanges:
    - rangeName: mytest-kube-pod
      ipCidrRange: x.x.x.0/24
    - rangeName: mytest-kube-service
      ipCidrRange: x.x.x.0/24
  accessControl:
    gcpIamPolicy:
      bindings:
      - members:
        - serviceAccount:xxxxxx-compute@developer.gserviceaccount.com
        - serviceAccount:xxxxxx@cloudservices.gserviceaccount.com
        - serviceAccount:service-xxxxxxx@container-engine-robot.iam.gserviceaccount.com
        role: roles/compute.networkUser

I have found a document with step-by-step instructions on how to create a Shared VPC. Please keep in mind that Shared VPC is an organization-level feature. As such, it requires some organization-level policies to be configured: the service account used by Deployment Manager needs specific roles at the organization level.
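For reference, a rough gcloud sketch of the pieces that sit outside the subnet template: granting the Deployment Manager service account Shared VPC Admin at the organization level, enabling/associating the projects, and sharing the subnet. The IDs and the subnet name are placeholders, and the subnet IAM command may require the beta component depending on your gcloud version:

# Give the Deployment Manager service account Shared VPC Admin at the org level
gcloud organizations add-iam-policy-binding ORG_ID \
  --member="serviceAccount:HOST_PROJECT_NUMBER@cloudservices.gserviceaccount.com" \
  --role="roles/compute.xpnAdmin"

# Enable the host project and associate the service project with it
gcloud compute shared-vpc enable HOST_PROJECT_ID
gcloud compute shared-vpc associated-projects add SERVICE_PROJECT_ID \
  --host-project HOST_PROJECT_ID

# Share the subnet; repeat for each of the three service accounts from the question
gcloud compute networks subnets add-iam-policy-binding mytest-kube \
  --region us-east1 \
  --member="serviceAccount:SERVICE_PROJECT_NUMBER@cloudservices.gserviceaccount.com" \
  --role="roles/compute.networkUser"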

Related

Best practices when deploying Dask Cloudprovider EC2 cluster while allowing public access to Dashboard

I have the following workflow where I start a Dask Cloudprovider EC2Cluster with the following config:
dask_cloudprovider:
  region: eu-central-1
  bootstrap: True
  instance_type: t3.small
  n_workers: 1
  docker_image: daskdev/dask:2022.10.0-py3.10
  security: False
  debug: True
  use_private_ip: False
  # other properties: region, instance_type, security_group, vpc, etc. omitted
This is supposed to start both my scheduler and my workers in docker on EC2 instances in the same VPC. The AWS security group has the following inbound rules:
Port range     Source
8786 - 8787    [own group]
8787           [my ip]
So, the workers and scheduler should be able to talk among themselves, and I should also be able to access the Scheduler Bokeh Dashboard from my IP only.
The important part is that the above security group rules only allow communication between the private IPs of the instances in the same security group, and do not allow traffic between the public IPs of the instances.
This is OK, since traffic between public IPs necessarily must be routed through an internet gateway and hence incurs bandwidth costs.
The problem is these network rules do not work as-is. My scheduler starts as follows:
Creating scheduler instance
Created instance i-09fe7442a8026db71 as dask-723b83ae-scheduler
Waiting for scheduler to run at <public scheduler IP>:8786
The IP that dask-cloudprovider advertises as the IP where the workers should connect is the public scheduler IP. Worker<->Scheduler traffic through public IP is not allowed as per my security group (and I wish to keep it so).
The only way I've found that allows the workers and scheduler to communicate in this setup is to set use_private_ip: True, but in that case my EC2Cluster startup hangs trying to reach the scheduler at its private EC2 IP, which I obviously can't access from my own workstation.
I've seen that the 'recommended' approach is to deploy the EC2Cluster from yet another VM on e.g. EC2 in the same VPC, but I don't understand the upside of this when I could simply develop and then start it locally. It also adds another layer where I would have to get my code onto this separate VM (and install all requirements or rebuild an image every time).
What is the best way to accomplish my goal here?

easiest way to expose an elastic load balancer to the internet

I've deployed Grafana to an AWS EKS cluster and I want to be able to access it from a web browser. If I create a Kubernetes Service of type LoadBalancer, then (based on the very limited AWS networking knowledge I have) I know this maps to an Elastic Load Balancer. I can get its name, go to Network & Security -> Network Interfaces, and find all the interfaces associated with it, one for each EC2 instance. I presume it is the public IP address associated with each ELB network interface that I need to arrange access to in order to reach my Grafana service. Again, my AWS networking knowledge is very lacking: what is the fastest and easiest way for me to make the Grafana Kubernetes service accessible via my web browser?
The easiest way to expose any app running on Kubernetes is to create a Service of type LoadBalancer.
I use this myself for some services to get things up and running quickly.
To get the load balancer name I run
kubectl get svc
which gives me the load balancer's FQDN. I then map it to a DNS record.
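As a minimal sketch of that approach for Grafana (the service name, the app: grafana selector, and the external port are assumptions; 3000 is Grafana's default container port):

apiVersion: v1
kind: Service
metadata:
  name: grafana
spec:
  type: LoadBalancer          # provisions an ELB on EKS
  selector:
    app: grafana              # must match the labels on your Grafana pods
  ports:
  - port: 80                  # port exposed by the load balancer
    targetPort: 3000          # Grafana's default container port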
The other way which I use is to deploy the nginx-ingress-controller.
https://kubernetes.github.io/ingress-nginx/deploy/#aws
This also creates a Service of type LoadBalancer.
I then create an Ingress, which is mapped to the ingress controller's ELB.
https://kubernetes.io/docs/concepts/services-networking/ingress/#the-ingress-resource
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minimal-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /testpath
        pathType: Prefix
        backend:
          service:
            name: test
            port:
              number: 80
I use this for all my apps, mapping them through a single load balancer (one ELB) via the nginx-ingress-controller.
In my specific scenario, the solution was to open up the port used by the LoadBalancer service.

Creating a ManagedCertificate results in "Status: FailedNotVisible"

Using Kubernetes 1.12.6-gke.7 or higher it is possible to create a ManagedCertificate which is then referenced from an Ingress Resource exposing a Service to the Internet.
Running kubectl describe managedcertificate certificate-name first indicates the certificate is in a Provisioning state but eventually goes to FailedNotVisible.
Despite using a static IP and DNS that resolves fine to the HTTP version of said service, all ManagedCertificates end up in a "Status: FailedNotVisible" state.
Outline of what I am doing (a rough gcloud sketch of steps 1 and 2 follows the list):
1. Generate a reserved (static) external IP address.
2. Configure a DNS A record in Cloud DNS pointing subdomain.domain.com to the IP address generated in step 1.
3. Create a ManagedCertificate named "subdomain-domain-certificate" with kubectl apply -f, with spec.domains containing a single domain corresponding to the subdomain.domain.com DNS record from step 2.
4. Create a simple Deployment and a Service exposing it.
5. Create an Ingress resource referring to the default backend of the Service from step 4, with annotations for the static IP created in step 1 and the managed certificate created in step 3.
6. Confirm that the Ingress is created and is assigned the static IP.
7. Visit http://subdomain.domain.com, which serves the output from the Pod created by the Deployment in step 4.
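For illustration, steps 1 and 2 look roughly like this in gcloud (the address name subdomain-ip, the zone name my-zone, and RESERVED_IP are placeholders):

# Step 1: reserve a global static IP for the Ingress
gcloud compute addresses create subdomain-ip --global
gcloud compute addresses describe subdomain-ip --global --format='value(address)'

# Step 2: point subdomain.domain.com at the reserved address in Cloud DNS
gcloud dns record-sets transaction start --zone=my-zone
gcloud dns record-sets transaction add RESERVED_IP \
  --name=subdomain.domain.com. --ttl=300 --type=A --zone=my-zone
gcloud dns record-sets transaction execute --zone=my-zone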
After a little while
kubectl describe managedcertificate subdomain-domain-certificate
results in "Status: FailedNotVisible".
Name:         subdomain-domain-certificate
Namespace:    default
Labels:       <none>
Annotations:  <none>
API Version:  networking.gke.io/v1beta1
Kind:         ManagedCertificate
Metadata:
  Creation Timestamp:  2019-04-15T17:35:22Z
  Generation:          1
  Resource Version:    52637
  Self Link:           /apis/networking.gke.io/v1beta1/namespaces/default/managedcertificates/subdomain-domain-certificate
  UID:                 d8e5a0a4-5fa4-11e9-984e-42010a84001c
Spec:
  Domains:
    subdomain.domain.com
Status:
  Certificate Name:    mcrt-ac63730e-c271-4826-9154-c198d654f9f8
  Certificate Status:  Provisioning
  Domain Status:
    Domain:  subdomain.domain.com
    Status:  FailedNotVisible
Events:
  Type    Reason  Age  From                            Message
  ----    ------  ---  ----                            -------
  Normal  Create  56m  managed-certificate-controller  Create SslCertificate mcrt-ac63730e-c271-4826-9154-c198d654f9f8
From what I understand, if the load balancer is configured correctly (done under the hood by the ManagedCertificate resource) and the DNS (which resolves fine to the non-HTTPS endpoint) checks out, the certificate should go into a Status: Active state?
The issue underlying my problem ended up being a DNSSEC misconfiguration. After running the DNS through https://dnssec-analyzer.verisignlabs.com/ I was able to identify and fix the issue.
DNSSEC was indeed not enabled for my domain, but even after configuring it the ManagedCertificate was still not going through, and I had no clue what was going on. Deleting and re-applying the ManagedCertificate and Ingress manifests did not do the trick. However, running gcloud beta compute ssl-certificates list showed several unused managed certificates hanging around; deleting them with gcloud compute ssl-certificates delete NAME ... and then restarting the configuration process did the trick in my case.
You need to make sure the domain name resolves to the IP address of your GKE Ingress, following the directions for "creating an Ingress with a managed certificate" exactly.
For more details, see the Google Cloud Load Balancing documentation. From https://cloud.google.com/load-balancing/docs/ssl-certificates#domain-status:
"The status FAILED_NOT_VISIBLE indicates that certificate provisioning failed for a domain because of a problem with DNS or the load balancing configuration. Make sure that DNS is configured so that the certificate's domain resolves to the IP address of the load balancer."
I just ran into this problem when I was setting up a new service and my allowance of 8 external IPs was used up.
Following the troubleshooting guide, I checked whether there was a forwarding rule for port 443 to my Ingress.
There wasn't.
When I tried to set it up manually, I got an error telling me I used up my 8 magic addresses.
I deleted forwarding rules I didn't need et voila!
Now, why the forwarding rule for port 80 was successfully set up for the same ingress is beyond me.
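To check this yourself, commands along these lines show the forwarding rules and reserved addresses, and free up ones you no longer need (the rule and address names are just examples):

# Look for a forwarding rule targeting port 443 of the ingress
gcloud compute forwarding-rules list

# See how many external addresses are currently reserved
gcloud compute addresses list

# Delete rules/addresses that are no longer needed
gcloud compute forwarding-rules delete UNUSED_RULE_NAME --global
gcloud compute addresses delete UNUSED_ADDRESS_NAME --global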
I ran across this same error and found that I had created the managedCertificate in the wrong Kubernetes namespace. Once the managedCertificate was placed in the correct namespace everything worked.
After reading the troubleshooting guide, I still wasn't able to resolve my issue. When I checked the GCP Ingress events, they showed that the Ingress could not locate the SSL policy. Check whether you missed something when creating the Ingress.
This is another useful reference for verifying your k8s manifests when setting up the managed certificate and Ingress. Hope it helps someone.

tunnel or proxy from app in one kubernetes cluster (local/minikube) to a database inside a different kubernetes cluster (on Google Container Engine)

I have a large read-only elasticsearch database running in a kubernetes cluster on Google Container Engine, and am using minikube to run a local dev instance of my app.
Is there a way I can have my app connect to the cloud elasticsearch instance so that I don't have to create a local test database with a subset of the data?
The database contains sensitive information, so it can't be visible outside its own cluster or VPC.
My fall-back is to run kubectl port-forward inside the local pod:
kubectl --cluster=<gke-database-cluster-name> --token='<token from ~/.kube/config>' port-forward elasticsearch-pod 9200
but this seems suboptimal.
I'd use an ExternalName Service like
kind: Service
apiVersion: v1
metadata:
  name: elastic-db
  namespace: prod
spec:
  type: ExternalName
  externalName: your.elastic.endpoint.com
According to the docs
An ExternalName service is a special case of service that does not have selectors. It does not define any ports or endpoints. Rather, it serves as a way to return an alias to an external service residing outside the cluster.
If you need to expose the elastic database, there are two ways of exposing applications to outside the cluster:
Creating a Service of type LoadBalancer, that would load balance the traffic for all instances of your elastic database. Once the Load Balancer is created on GKE, just add the load balancer's DNS as the value for the elastic-db ExternalName created above.
Using an Ingress controller. The Ingress controller will have an IP that is reachable from outside the cluster. Use that IP as ExternalName for the elastic-db created above.
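A minimal sketch of the first option, assuming the elasticsearch pods carry the label app: elasticsearch and listen on port 9200 (both assumptions); once this Service has an external address, point the externalName of the elastic-db Service above at a DNS name that resolves to it:

apiVersion: v1
kind: Service
metadata:
  name: elasticsearch-lb
spec:
  type: LoadBalancer
  selector:
    app: elasticsearch      # assumed label on the elasticsearch pods
  ports:
  - port: 9200              # Elasticsearch HTTP port
    targetPort: 9200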

How to set Role dependencies in Windows Server 2012 Failover Cluster

I have a 2 node cluster with 5 configured roles (Generic Service). The services need to run on a single machine (one is a database, one is a server,...)
I want to configure the cluster so that if a single role fails and is moved to another machine, the other roles are moved to that machine too.
I tried to configure this to no avail. If I open the Dependencies tab in the Properties window of a role, only the IP Address resource is available.
Does anyone know how to configure this?
I figured it out: I had created a role for every service, which wasn't right. I should have created a single "Other Server" role, right-clicked on it, and added the resources to that role.
