Unable to create a deployment without a replication controller in Kubernetes client-go

The issue is that I cannot create a deployment spec without a replication controller being created along with it. I would prefer not to use a replication controller, because my app always uses only one pod and I would like to set the restart policy to Never so that a terminated container is not restarted.
apiVersion: v1
kind: Pod
metadata:
  name: two-containers
spec:
  restartPolicy: Never
  volumes:
  - name: shared-data
    emptyDir: {}
  containers:
  - name: nginx-container
    image: nginx
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/nginx/html
  - name: debian-container
    image: debian
    volumeMounts:
    - name: shared-data
      mountPath: /pod-data
    command: ["/bin/sh"]
    args: ["-c", "echo Hello from the debian container > /pod-data/index.html"]
Above is the target YAML file, which I would like to implement and deploy with Kubernetes client-go; however, client-go currently only seems to provide deployments backed by a replication controller.
// Define Deployments spec.
deploySpec := &v1beta1.Deployment{
    TypeMeta: unversioned.TypeMeta{
        Kind:       "Deployment",
        APIVersion: "extensions/v1beta1",
    },
    ObjectMeta: v1.ObjectMeta{
        Name: "binary-search",
    },
    Spec: v1beta1.DeploymentSpec{
        Replicas: int32p(1),
        Template: v1.PodTemplateSpec{
            ObjectMeta: v1.ObjectMeta{
                Name:   appName,
                Labels: map[string]string{"app": appName},
            },
            Spec: v1.PodSpec{
                Containers: []v1.Container{
                    v1.Container{
                        Name:  "nginx-container",
                        Image: "nginx",
                        VolumeMounts: []v1.VolumeMount{
                            v1.VolumeMount{
                                MountPath: "/usr/share/nginx/html",
                                Name:      "shared-data",
                            },
                        },
                    },
                    v1.Container{
                        Name:  "debian-container",
                        Image: "debian",
                        VolumeMounts: []v1.VolumeMount{
                            v1.VolumeMount{
                                MountPath: "/pod-data",
                                Name:      "shared-data",
                            },
                        },
                        Command: []string{
                            "/bin/sh",
                        },
                        Args: []string{
                            "-c",
                            "echo Hello from the debian container > /pod-data/index1.html",
                        },
                    },
                },
                RestartPolicy: v1.RestartPolicyAlways,
                DNSPolicy:     v1.DNSClusterFirst,
                Volumes: []v1.Volume{
                    v1.Volume{
                        Name: "shared-data",
                        VolumeSource: v1.VolumeSource{
                            EmptyDir: &v1.EmptyDirVolumeSource{},
                        },
                    },
                },
            },
        },
    },
}
// Implement deployment update-or-create semantics.
deploy := c.Extensions().Deployments(namespace)
_, err := deploy.Update(deploySpec)
Any suggestion? Many thanks in advance!

If you don't want the service to be restarted, then you can just use a Pod directly. There is no need to use a Deployment, since those only make sense if you want automatic Pod restarts and roll-outs of updates.
The code would look something like this (not tested):
podSpec := v1.PodSpec{
    Containers: []v1.Container{
        v1.Container{
            Name:  "nginx-container",
            Image: "nginx",
            VolumeMounts: []v1.VolumeMount{
                v1.VolumeMount{
                    MountPath: "/usr/share/nginx/html",
                    Name:      "shared-data",
                },
            },
        },
        v1.Container{
            Name:  "debian-container",
            Image: "debian",
            VolumeMounts: []v1.VolumeMount{
                v1.VolumeMount{
                    MountPath: "/pod-data",
                    Name:      "shared-data",
                },
            },
            Command: []string{
                "/bin/sh",
            },
            Args: []string{
                "-c",
                "echo Hello from the debian container > /pod-data/index1.html",
            },
        },
    },
    // Never is what the question asks for, and it is allowed on a bare Pod.
    RestartPolicy: v1.RestartPolicyNever,
    DNSPolicy:     v1.DNSClusterFirst,
    Volumes: []v1.Volume{
        v1.Volume{
            Name: "shared-data",
            VolumeSource: v1.VolumeSource{
                EmptyDir: &v1.EmptyDirVolumeSource{},
            },
        },
    },
}
// Wrap the spec in a Pod object and create it directly; there is no Deployment involved.
pod := &v1.Pod{
    ObjectMeta: v1.ObjectMeta{Name: "two-containers"},
    Spec:       podSpec,
}
_, err := c.Core().Pods(namespace).Create(pod)
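If you are on a newer client-go release (roughly v0.18 and later), the typed clients take a context and an options struct instead. Below is a minimal, untested sketch of the same idea for that API style; the createPod helper, the kubeconfig path, and the namespace/appName arguments are my own placeholders, not part of the original answer:
import (
    "context"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

// createPod wraps a PodSpec in a Pod object and submits it; a bare PodSpec
// cannot be created on its own.
func createPod(kubeconfig, namespace, appName string, podSpec corev1.PodSpec) error {
    // Build a clientset from a kubeconfig file (in-cluster config would work too).
    config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
    if err != nil {
        return err
    }
    clientset, err := kubernetes.NewForConfig(config)
    if err != nil {
        return err
    }
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{
            Name:   appName,
            Labels: map[string]string{"app": appName},
        },
        Spec: podSpec,
    }
    _, err = clientset.CoreV1().Pods(namespace).Create(context.TODO(), pod, metav1.CreateOptions{})
    return err
}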

Ansible iterate from a list

I'm trying to parse a JSON output (added below) and write the values I need into a new JSON file. The values I need are metadata.name and metadata.namespace.
The following JSON is the file I need to parse and extract the values from. I get this output from the command kubectl get pod -o json.
{
  "apiVersion": "v1",
  "items": [
    {
      "apiVersion": "v1",
      "kind": "Pod",
      "metadata": {
        "creationTimestamp": "2022-12-21T12:18:49Z",
        "generateName": "backend-99fb66465-",
        "labels": {
          "app": "backend",
          "pod-template-hash": "99fb66465"
        },
        "name": "backend-99fb66465-2lxwp",
        "namespace": "testingspace",
      }
    },
    {
      "apiVersion": "v1",
      "kind": "Pod",
      "metadata": {
        "creationTimestamp": "2022-12-21T12:18:49Z",
        "generateName": "backend-99fb66465-",
        "labels": {
          "app": "backend",
          "pod-template-hash": "99fb66465"
        },
        "name": "backend-99fb66465-2lxwp",
        "namespace": "testingspace",
      }
    }
  ]
  [...]
}
My Ansible code is this:
- name: Search for all running pods from file ./data/kubernetes/pods-status
  shell: |
    cat ./data/kubernetes/pods-status
  register: pods

- name: Pods name
  set_fact:
    podnames: "{{ pods.stdout|from_json|json_query(names) }}"
    podkind: "{{ pods.stdout|from_json|json_query(kind) }}"
  vars:
    names: 'items[*].metadata.name'
    kind: 'items[*].kind'

- name: Copy pods information to local file
  local_action:
    module: copy
    dest: "./data/kubernetes/mainpod.json"
    #content: "{{ podsjson | to_json }} "
    content: "{{ [{'val': item }] }}"
  loop: "{{ podnames }}"
I'm expecting to have the following file:
{
  "items": [
    {
      "name": "backend-99fb66465-2lxwp",
      "namespace": "testingspace"
    },
    {
      "name": "backend-99fb66465-2lxwp",
      "namespace": "testingspace"
    }
  ]
}
But so far I just have this one:
[{"val": "backup-mysqldump-27875520-d26j4"}]
Given the data in the dictionary for testing:
pods:
  apiVersion: v1
  items:
  - apiVersion: v1
    kind: Pod
    metadata:
      creationTimestamp: '2022-12-21T12:18:49Z'
      generateName: backend-99fb66465-
      labels:
        app: backend
        pod-template-hash: 99fb66465
      name: backend-99fb66465-2lxwp
      namespace: testingspace
  - apiVersion: v1
    kind: Pod
    metadata:
      creationTimestamp: '2022-12-21T12:18:49Z'
      generateName: backend-99fb66465-
      labels:
        app: backend
        pod-template-hash: 99fb66465
      name: backend-99fb66465-2lxwp
      namespace: testingspace
Declare the query:
name_space: "{{ pods|json_query(name_space_query) }}"
name_space_query: 'items[].{name: metadata.name,
                            namespace: metadata.namespace}'
gives:
name_space:
- name: backend-99fb66465-2lxwp
  namespace: testingspace
- name: backend-99fb66465-2lxwp
  namespace: testingspace
Example of a complete playbook for testing:
- hosts: localhost
  vars:
    pods:
      apiVersion: v1
      items:
      - apiVersion: v1
        kind: Pod
        metadata:
          creationTimestamp: '2022-12-21T12:18:49Z'
          generateName: backend-99fb66465-
          labels:
            app: backend
            pod-template-hash: 99fb66465
          name: backend-99fb66465-2lxwp
          namespace: testingspace
      - apiVersion: v1
        kind: Pod
        metadata:
          creationTimestamp: '2022-12-21T12:18:49Z'
          generateName: backend-99fb66465-
          labels:
            app: backend
            pod-template-hash: 99fb66465
          name: backend-99fb66465-2lxwp
          namespace: testingspace
    name_space: "{{ pods|json_query(name_space_query) }}"
    name_space_query: 'items[].{name: metadata.name,
                                namespace: metadata.namespace}'
  tasks:
    - debug:
        var: name_space

Beats can’t reach Elastic Service

I've been running my ECK (Elastic Cloud on Kubernetes) cluster for a couple of weeks with no issues. However, 3 days ago filebeat stopped being able to connect to my ES service. All pods are up and running (Elastic, Beats and Kibana).
Also, shelling into filebeats pods and connecting to the Elasticsearch service works just fine:
curl -k -u "user:$PASSWORD" https://quickstart-es-http.quickstart.svc:9200
{
"name" : "aegis-es-default-4",
"cluster_name" : "quickstart",
"cluster_uuid" : "",
"version" : {
"number" : "7.14.0",
"build_flavor" : "default",
"build_type" : "docker",
"build_hash" : "",
"build_date" : "",
"build_snapshot" : false,
"lucene_version" : "8.9.0",
"minimum_wire_compatibility_version" : "6.8.0",
"minimum_index_compatibility_version" : "6.0.0-beta1"
},
"tagline" : "You Know, for Search"
}
Yet the filebeats pod logs are producing the below error:
ERROR
[publisher_pipeline_output] pipeline/output.go:154
Failed to connect to backoff(elasticsearch(https://quickstart-es-http.quickstart.svc:9200)):
Connection marked as failed because the onConnect callback failed: could not connect to a compatible version of Elasticsearch:
503 Service Unavailable:
{
"error": {
"root_cause": [
{ "type": "master_not_discovered_exception", "reason": null }
],
"type": "master_not_discovered_exception",
"reason": null
},
"status": 503
}
I haven't made any changes, so I suspect it's a case of authentication or the SSL certificates needing to be updated?
My filebeats config looks like this:
apiVersion: beat.k8s.elastic.co/v1beta1
kind: Beat
metadata:
  name: quickstart
  namespace: quickstart
spec:
  type: filebeat
  version: 7.14.0
  elasticsearchRef:
    name: quickstart
  config:
    filebeat:
      modules:
      - module: gcp
        audit:
          enabled: true
          var.project_id: project_id
          var.topic: topic_name
          var.subcription: sub_name
          var.credentials_file: /usr/certs/credentials_file
          var.keep_original_message: false
        vpcflow:
          enabled: true
          var.project_id: project_id
          var.topic: topic_name
          var.subscription_name: sub_name
          var.credentials_file: /usr/certs/credentials_file
        firewall:
          enabled: true
          var.project_id: project_id
          var.topic: topic_name
          var.subscription_name: sub_name
          var.credentials_file: /usr/certs/credentials_file
  daemonSet:
    podTemplate:
      spec:
        serviceAccountName: filebeat
        automountServiceAccountToken: true
        dnsPolicy: ClusterFirstWithHostNet
        hostNetwork: true
        securityContext:
          runAsUser: 0
        containers:
        - name: filebeat
          volumeMounts:
          - name: varlogcontainers
            mountPath: /var/log/containers
          - name: varlogpods
            mountPath: /var/log/pods
          - name: varlibdockercontainers
            mountPath: /var/lib/docker/containers
          - name: credentials
            mountPath: /usr/certs
            readOnly: true
        volumes:
        - name: varlogcontainers
          hostPath:
            path: /var/log/containers
        - name: varlogpods
          hostPath:
            path: /var/log/pods
        - name: varlibdockercontainers
          hostPath:
            path: /var/lib/docker/containers
        - name: credentials
          secret:
            defaultMode: 420
            items:
              secretName: elastic-service-account
And it was working just fine; I haven't made any changes to this config that would make it lose access.
Did a little more digging and found that there weren't enough resources available to assign a master node.
When I ran GET /_cat/master it returned the same 503 "master not discovered" error. I added a new node pool and the cluster started running normally.

Creating Kubernetes Cronjob

import (
    "context"
    "fmt"

    infinimeshv1beta1 "github.com/infinimesh/operator/pkg/apis/infinimesh/v1beta1"
    v1beta1 "k8s.io/api/batch/v1beta1"
    corev1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/api/errors"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/types"
    "sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"
    "sigs.k8s.io/controller-runtime/pkg/reconcile"
)

func (r *ReconcilePlatform) reconcileResetRootAccount(request reconcile.Request, instance *infinimeshv1beta1.Platform) error {
    log := logger.WithName("Reset Root Account Pwd")
    job := &v1beta1.CronJob{
        ObjectMeta: metav1.ObjectMeta{
            Name: "example",
        },
        Spec: v1beta1.CronJobSpec{
            Schedule:          "* * * * *",
            ConcurrencyPolicy: v1beta1.ForbidConcurrent,
            JobTemplate: v1beta1.JobTemplate{
                Spec: v1beta1.JobTemplateSpec{
                    Template: corev1.PodTemplateSpec{
                        Spec: corev1.PodSpec{
                            RestartPolicy: "Never",
                            Containers: []corev1.Container{
                                {
                                    Name:  "cli",
                                    Image: "busybox",
                                    Command: []string{
                                        "/bin/bash",
                                        "-c",
                                        "echo 1",
                                    },
                                    ImagePullPolicy: "Always",
                                },
                            },
                        },
                    },
                },
            },
        },
    }
I am getting an error here:
JobTemplate: v1beta1.JobTemplate{
    Spec: v1beta1.JobTemplateSpec{
        Template: corev1.PodTemplateSpec{
as I might not be defining it the right way. Please guide me on the right way to create a CronJob in Go. You can also share your own way of writing CronJobs in Go, as I want to automate the CronJob in the Kubernetes operator so that when I restart the platform the CronJob is created automatically.
A CronJob wraps a Job, which wraps a Pod. So you need to put the PodTemplateSpec inside a JobSpec inside the CronJobSpec.
cronjob := &v1beta1.CronJob{
    ObjectMeta: metav1.ObjectMeta{
        Name:      "demo-cronjob",
        Namespace: "gitlab",
    },
    Spec: v1beta1.CronJobSpec{
        Schedule:          "* * * * *",
        ConcurrencyPolicy: v1beta1.ForbidConcurrent,
        JobTemplate: v1beta1.JobTemplateSpec{
            Spec: batchv1.JobSpec{
                Template: v1.PodTemplateSpec{
                    Spec: v1.PodSpec{
                        ...
                    },
                },
            },
        },
    },
}
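To have the operator (re)create the CronJob on every reconcile, you can then submit the object from the reconcile loop. The following is an untested sketch that only uses packages already imported in the question; it assumes your ReconcilePlatform struct carries the usual controller-runtime client and scheme fields (called r.client and r.scheme here, which may be named differently in your operator):
// Make the Platform own the CronJob so it is cleaned up with it
// (assumes r.scheme is a *runtime.Scheme registered with both types).
if err := controllerutil.SetControllerReference(instance, cronjob, r.scheme); err != nil {
    return err
}
// Create the CronJob; ignore "already exists" so repeated reconciles
// (for example after a platform restart) stay idempotent.
// Assumes r.client is a controller-runtime client.Client.
if err := r.client.Create(context.TODO(), cronjob); err != nil && !errors.IsAlreadyExists(err) {
    return err
}
return nil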

Could not communicate to Elasticsearch, resetting connection and trying again. end of file reached (EOFError)

I have an ECK setup: https://www.elastic.co/guide/en/cloud-on-k8s/1.0-beta/k8s-overview.html. I am trying to add Fluentd so Kubernetes logs can be sent to Elasticsearch and viewed in Kibana.
However, when I look at the Fluentd pod I can see the following errors. It looks like it is having trouble connecting to Elasticsearch, or finding it at all:
2020-07-02 15:47:54 +0000 [warn]: #0 [out_es] Could not communicate to
Elasticsearch, resetting connection and trying again. end of file
reached (EOFError) 2020-07-02 15:47:54 +0000 [warn]: #0 [out_es]
Remaining retry: 14. Retry to communicate after 2 second(s).
2020-07-02 15:47:58 +0000 [warn]: #0 [out_es] Could not communicate to
Elasticsearch, resetting connection and trying again. end of file
reached (EOFError) 2020-07-02 15:47:58 +0000 [warn]: #0 [out_es]
Remaining retry: 13. Retry to communicate after 4 second(s).
elastic.yml
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: es-gp2
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
reclaimPolicy: Delete
---
apiVersion: elasticsearch.k8s.elastic.co/v1beta1
kind: Elasticsearch
metadata:
  name: data-es
spec:
  version: 7.4.2
  spec:
    http:
      tls:
        certificate:
          secretName: es-cert
  nodeSets:
  - name: default
    count: 2
    volumeClaimTemplates:
    - metadata:
        name: es-data
        annotations:
          volume.beta.kubernetes.io/storage-class: es-gp2
      spec:
        accessModes:
        - ReadWriteOnce
        storageClassName: es-gp2
        resources:
          requests:
            storage: 10Gi
    config:
      node.master: true
      node.data: true
      node.ingest: true
      node.store.allow_mmap: false
      xpack.security.authc.realms:
        native:
          native1:
            order: 1
---
apiVersion: kibana.k8s.elastic.co/v1beta1
kind: Kibana
metadata:
  name: data-kibana
spec:
  version: 7.4.2
  count: 1
  elasticsearchRef:
    name: data-es
fluentd.yml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: fluentd
  namespace: elastic
  labels:
    app: fluentd
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: fluentd
  labels:
    app: fluentd
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - namespaces
  verbs:
  - get
  - list
  - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: fluentd
roleRef:
  kind: ClusterRole
  name: fluentd
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: fluentd
  namespace: elastic
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: elastic
  labels:
    app: fluentd
spec:
  selector:
    matchLabels:
      app: fluentd
  template:
    metadata:
      labels:
        app: fluentd
    spec:
      serviceAccount: fluentd
      serviceAccountName: fluentd
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - name: fluentd
        image: fluent/fluentd-kubernetes-daemonset:v1-debian-elasticsearch
        env:
        - name: FLUENT_ELASTICSEARCH_HOST
          value: "data-es-es-default.default"
        - name: FLUENT_ELASTICSEARCH_PORT
          value: "9200"
        - name: FLUENT_ELASTICSEARCH_SCHEME
          value: "http"
        - name: FLUENTD_SYSTEMD_CONF
          value: disable
        resources:
          limits:
            memory: 512Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
I'm unsure about the FLUENT_ELASTICSEARCH_HOST variable. The value I have set is data-es-es-default.default, because I have a service called data-es-es-default and it is within the default namespace.
I've set up Fluentd, and only Fluentd, using the following guide: https://www.digitalocean.com/community/tutorials/how-to-set-up-an-elasticsearch-fluentd-and-kibana-efk-logging-stack-on-kubernetes#step-4-%E2%80%94-creating-the-fluentd-daemonset. An existing Elasticsearch was already present in the Kubernetes cluster, set up using the ECK link above.
data-es-es-default: does not look like it is exposed as an HTTP service over 9200.
data-es-es-http: it looks like I have a second service that is exposed as an HTTP service over 9200; I'm not sure what the difference between these two is.
Curl the es service from within the pod:
curl -u elastic:mypassword https://data-es-es-http.default:9200 -k
{
"name" : "data-es-es-default-1",
"cluster_name" : "data-es",
"cluster_uuid" : "vPWB0jbBT76Aq6Tbo7ta7w",
"version" : {
"number" : "7.4.2",
"build_flavor" : "default",
"build_type" : "docker",
"build_hash" : "2f90bbf7b93631e52bafb59b3b049cb44ec25e96",
"build_date" : "2019-10-28T20:40:44.881551Z",
"build_snapshot" : false,
"lucene_version" : "8.2.0",
"minimum_wire_compatibility_version" : "6.8.0",
"minimum_index_compatibility_version" : "6.0.0-beta1"
},
"tagline" : "You Know, for Search"
}

How can I provide my AKS (External IP <Pending>)?

I want to deploy my microservice infrastructure on AKS in Azure. I created a node on which 3 microservices run. My API gateway should be reachable via a public IP and should forward data to my other two microservices.
PS /home/jan-marius> kubectl get pods
NAME READY STATUS RESTARTS AGE
apigateway-77875f89cb-qcmnf 1/1 Running 0 18h
contacts-5ccc69f74-x287p 1/1 Running 0 18h
templates-579fc4984b-srv7h 1/1 Running 0 18h
So far so good. After that I created a public IP following the Microsoft docs and changed my YAML file as follows.
az network public-ip create \
--resource-group myResourceGroup \
--name myAKSPublicIP \
--sku Standard \
--allocation-method static
apiVersion: apps/v1
kind: Deployment
metadata:
  name: apigateway
spec:
  replicas: 1
  selector:
    matchLabels:
      app: apigateway
  template:
    metadata:
      labels:
        app: apigateway
    spec:
      nodeSelector:
        "beta.kubernetes.io/os": linux
      containers:
      - name: apigateway
        image: xxx.azurecr.io/apigateway:11
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
          limits:
            cpu: 250m
            memory: 512Mi
        ports:
        - containerPort: 8800
          name: apigateway
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/azure-dns-label-name: tegos-sendmessage
  name: apigateway
spec:
  loadBalancerIP: 20.50.10.36
  type: LoadBalancer
  ports:
  - port: 8800
  selector:
    app: apigateway
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: contacts
spec:
  replicas: 1
  selector:
    matchLabels:
      app: contacts
  template:
    metadata:
      labels:
        app: contacts
    spec:
      nodeSelector:
        "beta.kubernetes.io/os": linux
      containers:
      - name: contacts
        image: xxx.azurecr.io/contacts:12
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
          limits:
            cpu: 250m
            memory: 512Mi
        ports:
        - containerPort: 8100
          name: contacts
---
apiVersion: v1
kind: Service
metadata:
  name: contacts
spec:
  ports:
  - port: 8100
  selector:
    app: contacts
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: templates
spec:
  replicas: 1
  selector:
    matchLabels:
      app: templates
  template:
    metadata:
      labels:
        app: templates
    spec:
      nodeSelector:
        "beta.kubernetes.io/os": linux
      containers:
      - name: templates
        image: xxx.azurecr.io/templates:13
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
          limits:
            cpu: 250m
            memory: 512Mi
        ports:
        - containerPort: 8200
          name: templates
---
apiVersion: v1
kind: Service
metadata:
  name: templates
spec:
  ports:
  - port: 8200
  selector:
    app: templates
However, when I look up the external IP address with get service, the status is still pending:
S /home/jan-marius> kubectl get service apigateway
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
apigateway LoadBalancer 10.0.181.113 <pending> 8800:30817/TCP 19h
PS /home/jan-marius> kubectl describe service apigateway
Name: apigateway
Namespace: default
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"service.beta.kubernetes.io/azure-dns-label-name":"tegos-sendmessage"},"nam...
service.beta.kubernetes.io/azure-dns-label-name: tegos-sendmessage
Selector: app=apigateway
Type: LoadBalancer
IP: 10.0.181.113
IP: 20.50.10.36
Port: <unset> 8800/TCP
TargetPort: 8800/TCP
NodePort: <unset> 30817/TCP
Endpoints: 10.244.0.14:8800
Session Affinity: None
External Traffic Policy: Cluster
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal EnsuringLoadBalancer 5m (x216 over 17h) service-controller Ensuring load balancer
I read on the net that this error can occur if the locations of the cluster and the external IP do not match, or if the LoadBalancer types do not match. I am sure that the locations match; I can't be sure about the LoadBalancer types. The external IP SKU is set to Standard. However, I have never defined the type of the LoadBalancer and I don't know where it can be found. Can someone tell me what I'm doing wrong and how I can expose my web service?
PS /home/jan-marius> az aks show -g SendMessageResource -n SendMessageCluster
{
"aadProfile": null,
"addonProfiles": {
"httpapplicationrouting": {
"config": {
"HTTPApplicationRoutingZoneName": "e6e284534ad74c0d9c01.westeurope.aksapp.io"
},
"enabled": true,
"identity": null
},
"omsagent": {
"config": {
"loganalyticsworkspaceresourceid": "/subscriptions/a553134ba7eb-cb83-484d-a05d-44bb70125b8a/resourcegroups/defaultresourcegroup-weu/providers/microsoft.operationalinsights/workspaces/defaultworkspace-a55ba7eb-cb83-484d-a05d-44bb334170125b8a-weu"
},
"enabled": true,
"identity": null
}
},
"agentPoolProfiles": [
{
"availabilityZones": null,
"count": 1,
"enableAutoScaling": null,
"enableNodePublicIp": false,
"maxCount": null,
"maxPods": 110,
"minCount": null,
"mode": "System",
"name": "nodepool1",
"nodeLabels": {},
"nodeTaints": null,
"orchestratorVersion": "1.15.11",
"osDiskSizeGb": 100,
"osType": "Linux",
"provisioningState": "Succeeded",
"scaleSetEvictionPolicy": null,
"scaleSetPriority": null,
"spotMaxPrice": null,
"tags": null,
"type": "VirtualMachineScaleSets",
"vmSize": "Standard_DS2_v2"
}
],
"apiServerAccessProfile": null,
"autoScalerProfile": null,
"diskEncryptionSetId": null,
"dnsPrefix": "SendMessag-SendMessageResou-a55ba7",
"enablePodSecurityPolicy": null,
"enableRbac": true,
"fqdn": "sendmessag-sendmessageresou-a55ba7-14596671.hcp.westeurope.azmk8s.io",
"id": "/subscriptions/a55b3141a7eb-cb83-484d-a05d-44bb70125b8a/resourcegroups/SendMessageResource/providers/Microsoft.ContainerService/managedClusters/SendMessageCluster",
"identity": null,
"identityProfile": null,
"kubernetesVersion": "1.15.11",
"linuxProfile": {
"adminUsername": "azureuser",
"ssh": {
"publicKeys": [
{
"keyData": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC7bzXktZht3zLbHrz3Xpv3VNhtrj/XmBKOIHB0D0ZpBIrsfXcg9veBov8n3cU/F/oKIfqcL2xaoktVwZFz9AjEi7qPXdxrsVLjV2+w0kPyC3ZC5JbtLSO4CFgn0MtclC6mE3OPYczYPoFdZI3/w/AmoZ6TsT7MupkCjKtrYIIaDZ/22zuTMYMvJro7cfjKI5OSR7soybXcoFKw+3tzwO9Mv9lUQr7x0eRCUAUJN6OziEI9p36fLEnNgRG4GiJJZP5aqqsVRUDuu8PF9pO0YLMBr3b2HHgzpDwSebZ6TU//okuc30cqG/2v2LkjBDRGrs5YxiSv3+ejr/9A4XGWup4Z"
}
]
}
},
"location": "westeurope",
"maxAgentPools": 10,
"name": "SendMessageCluster",
"networkProfile": {
"dnsServiceIp": "10.0.0.10",
"dockerBridgeCidr": "172.17.0.1/16",
"loadBalancerProfile": {
"allocatedOutboundPorts": null,
"effectiveOutboundIps": [
{
"id": "/subscriptions/a55b3142a7eb-cb83-484d-a05d-44bb70125b8a/resourceGroups/MC_SendMessageResource_SendMessageCluster_westeurope/providers/Microsoft.Network/publicIPAddresses/988314172c28-d4da-431e-b7f8-5acb08e468b4",
"resourceGroup": "MC_SendMessageResource_SendMessageCluster_westeurope"
}
],
"idleTimeoutInMinutes": null,
"managedOutboundIps": {
"count": 1
},
"outboundIpPrefixes": null,
"outboundIps": null
},
"loadBalancerSku": "Standard",
"networkMode": null,
"networkPlugin": "kubenet",
"networkPolicy": null,
"outboundType": "loadBalancer",
"podCidr": "10.244.0.0/16",
"serviceCidr": "10.0.0.0/16"
},
"nodeResourceGroup": "MC_SendMessageResource_SendMessageCluster_westeurope",
"privateFqdn": null,
"provisioningState": "Succeeded",
"resourceGroup": "SendMessageResource",
"servicePrincipalProfile": {
"clientId": "9009bcd8-4933-4641-b00b-237e157d86589b"
},
"sku": {
"name": "Basic",
"tier": "Free"
},
"type": "Microsoft.ContainerService/ManagedClusters",
"windowsProfile": null
}
If your public IP is in another resource group, you need to specify that resource group for the IP:
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/azure-dns-label-name: tegos-sendmessage
    service.beta.kubernetes.io/azure-load-balancer-resource-group: myResourceGroup
  name: apigateway
spec:
  loadBalancerIP: 20.50.10.36
  type: LoadBalancer
  ports:
  - port: 8800
  selector:
    app: apigateway
