I'm trying to parse a JSON output (added below) and write the extracted values into a new JSON file.
The values I need are metadata.name and metadata.namespace.
The following JSON is the file I need to parse and extract the values from. I get this output from the command: kubectl get pod -o json.
{
  "apiVersion": "v1",
  "items": [
    {
      "apiVersion": "v1",
      "kind": "Pod",
      "metadata": {
        "creationTimestamp": "2022-12-21T12:18:49Z",
        "generateName": "backend-99fb66465-",
        "labels": {
          "app": "backend",
          "pod-template-hash": "99fb66465"
        },
        "name": "backend-99fb66465-2lxwp",
        "namespace": "testingspace"
      }
    },
    {
      "apiVersion": "v1",
      "kind": "Pod",
      "metadata": {
        "creationTimestamp": "2022-12-21T12:18:49Z",
        "generateName": "backend-99fb66465-",
        "labels": {
          "app": "backend",
          "pod-template-hash": "99fb66465"
        },
        "name": "backend-99fb66465-2lxwp",
        "namespace": "testingspace"
      }
    }
  ]
  [...]
}
My Ansible code is:
- name: Search for all running pods from file ./data/kubernetes/pods-status
  shell: |
    cat ./data/kubernetes/pods-status
  register: pods

- name: Pods name
  set_fact:
    podnames: "{{ pods.stdout | from_json | json_query(names) }}"
    podkind: "{{ pods.stdout | from_json | json_query(kind) }}"
  vars:
    names: 'items[*].metadata.name'
    kind: 'items[*].kind'

- name: Copy pods information to local file
  local_action:
    module: copy
    dest: "./data/kubernetes/mainpod.json"
    #content: "{{ podsjson | to_json }} "
    content: "{{ [{'val': item }] }}"
  loop: "{{ podnames }}"
I'm expecting to have the following file:
{
  "items": [
    {
      "name": "backend-99fb66465-2lxwp",
      "namespace": "testingspace"
    },
    {
      "name": "backend-99fb66465-2lxwp",
      "namespace": "testingspace"
    }
  ]
}
But so far I just have this:
[{"val": "backup-mysqldump-27875520-d26j4"}]
Given the data in the dictionary for testing
pods:
  apiVersion: v1
  items:
    - apiVersion: v1
      kind: Pod
      metadata:
        creationTimestamp: '2022-12-21T12:18:49Z'
        generateName: backend-99fb66465-
        labels:
          app: backend
          pod-template-hash: 99fb66465
        name: backend-99fb66465-2lxwp
        namespace: testingspace
    - apiVersion: v1
      kind: Pod
      metadata:
        creationTimestamp: '2022-12-21T12:18:49Z'
        generateName: backend-99fb66465-
        labels:
          app: backend
          pod-template-hash: 99fb66465
        name: backend-99fb66465-2lxwp
        namespace: testingspace
Declare the query
name_space: "{{ pods | json_query(name_space_query) }}"
name_space_query: 'items[].{name: metadata.name,
                            namespace: metadata.namespace}'
gives
name_space:
  - name: backend-99fb66465-2lxwp
    namespace: testingspace
  - name: backend-99fb66465-2lxwp
    namespace: testingspace
Example of a complete playbook for testing
- hosts: localhost
  vars:
    pods:
      apiVersion: v1
      items:
        - apiVersion: v1
          kind: Pod
          metadata:
            creationTimestamp: '2022-12-21T12:18:49Z'
            generateName: backend-99fb66465-
            labels:
              app: backend
              pod-template-hash: 99fb66465
            name: backend-99fb66465-2lxwp
            namespace: testingspace
        - apiVersion: v1
          kind: Pod
          metadata:
            creationTimestamp: '2022-12-21T12:18:49Z'
            generateName: backend-99fb66465-
            labels:
              app: backend
              pod-template-hash: 99fb66465
            name: backend-99fb66465-2lxwp
            namespace: testingspace
    name_space: "{{ pods | json_query(name_space_query) }}"
    name_space_query: 'items[].{name: metadata.name,
                                namespace: metadata.namespace}'
  tasks:
    - debug:
        var: name_space
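If you want to check what this JMESPath projection computes outside of Ansible, the same transformation can be sketched in plain Python. This is a minimal sketch, with the pod data trimmed to the fields the query touches:

```python
import json

# Pod data trimmed to the fields the query touches,
# matching the `kubectl get pod -o json` structure above.
pods = {
    "apiVersion": "v1",
    "items": [
        {
            "kind": "Pod",
            "metadata": {"name": "backend-99fb66465-2lxwp",
                         "namespace": "testingspace"},
        },
        {
            "kind": "Pod",
            "metadata": {"name": "backend-99fb66465-2lxwp",
                         "namespace": "testingspace"},
        },
    ],
}

# Equivalent of the JMESPath query
# 'items[].{name: metadata.name, namespace: metadata.namespace}'.
name_space = [
    {"name": item["metadata"]["name"],
     "namespace": item["metadata"]["namespace"]}
    for item in pods["items"]
]

print(json.dumps(name_space, indent=2))
```

The list comprehension plays the role of the `items[]` projection, and the dict literal mirrors the `{name: ..., namespace: ...}` multiselect hash.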
I just started working with Ansible and JSON dictionary data. I am trying to process the following dictionary using a JMESPath query:
{
  "Entities": [
    {
      "MetaData": {
        "NumDiscoveredWorkloads": 20,
        "NumMigratedWorkloads": 1,
        "UUID": "uuid1234567890"
      },
      "Spec": {
        "Type": "VMWARE_ESXI_VCENTER",
        "UUID": "uuid1234567890"
      }
    },
    {
      "MetaData": {
        "NumDiscoveredWorkloads": 40,
        "UUID": "uuid1234567891"
      },
      "Spec": {
        "Type": "AOS_PC",
        "UUID": "uuid1234567891"
      }
    }
  ],
  "MetaData": {
    "Count": 2
  }
}
I want to find the value for the UUID of the ESXi item. I created the following Ansible code:
- name: "Get ESXiUUID"
  var:
    jmesquery: "Entities[*].Spec[?Type==`VMWARE_ESXI_VCENTER`].UUID"
  set_fact:
    move_providers_filtered: "{{ move_providers_details_content | json_query(jmesquery) }}"
But this does not work; I get strange errors from Ansible. I've been checking the web for a day now, but nothing I tried solves the error. Can anybody tell me what I am doing wrong and how to solve this? Thanks for any support.
Given the data
providers:
  Entities:
    - MetaData:
        NumDiscoveredWorkloads: 20
        NumMigratedWorkloads: 1
        UUID: uuid1234567890
      Spec:
        Type: VMWARE_ESXI_VCENTER
        UUID: uuid1234567890
    - MetaData:
        NumDiscoveredWorkloads: 40
        UUID: uuid1234567891
      Spec:
        Type: AOS_PC
        UUID: uuid1234567891
The query below
provider: "{{ providers.Entities | json_query(jmesquery) }}"
jmesquery: "[].Spec | [?Type==`VMWARE_ESXI_VCENTER`].UUID"
gives
provider:
  - uuid1234567890
Example of a complete playbook for testing
- hosts: localhost
  vars:
    providers:
      Entities:
        - MetaData:
            NumDiscoveredWorkloads: 20
            NumMigratedWorkloads: 1
            UUID: uuid1234567890
          Spec:
            Type: VMWARE_ESXI_VCENTER
            UUID: uuid1234567890
        - MetaData:
            NumDiscoveredWorkloads: 40
            UUID: uuid1234567891
          Spec:
            Type: AOS_PC
            UUID: uuid1234567891
    provider: "{{ providers.Entities | json_query(jmesquery) }}"
    jmesquery: "[].Spec | [?Type==`VMWARE_ESXI_VCENTER`].UUID"
  tasks:
    - debug:
        var: provider
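The piped JMESPath expression works in two stages, which can be sketched in plain Python (a minimal sketch using the sample data above):

```python
providers = {
    "Entities": [
        {"Spec": {"Type": "VMWARE_ESXI_VCENTER", "UUID": "uuid1234567890"}},
        {"Spec": {"Type": "AOS_PC", "UUID": "uuid1234567891"}},
    ]
}

# Stage 1: '[].Spec' projects each entity onto its Spec dict.
specs = [entity["Spec"] for entity in providers["Entities"]]

# Stage 2: '[?Type==`VMWARE_ESXI_VCENTER`].UUID' filters the Spec list
# and extracts the UUID of each match.
provider = [spec["UUID"] for spec in specs
            if spec["Type"] == "VMWARE_ESXI_VCENTER"]

print(provider)  # ['uuid1234567890']
```

The pipe (`|`) matters: it stops the projection from stage 1 so that the filter in stage 2 applies to the flattened list of Spec dicts rather than inside each entity.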
I do not get an error if I correct your obvious typo (var: -> vars:), but you did not include the "strange errors" that you received, so I don't know if that was the source.
I would not use JMESPath for this at all; most Ansible data manipulation can be done with pure Jinja, instead of introducing yet another thing to learn.
- hosts: localhost
  vars:
    move_providers_details_content:
      Entities:
        - Spec:
            Type: VMWARE_ESXI_VCENTER
            UUID: uuid1234567890
        - Spec:
            Type: AOS_PC
            UUID: uuid1234567891
    move_providers_filtered: "{{ (move_providers_details_content.Entities | selectattr('Spec.Type', '==', 'VMWARE_ESXI_VCENTER') | first).Spec.UUID }}"
  tasks:
    - name: Get ESXi UUID
      debug:
        msg: "{{ move_providers_filtered }}"
This uses selectattr() to filter on the attribute that we want to match, then just accesses the attributes as normal.
TASK [Get ESXi UUID] ***********************************************************
ok: [localhost] => {
"msg": "uuid1234567890"
}
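For readers more comfortable outside Jinja, the `selectattr(...) | first` chain corresponds to an ordinary filter-then-index, sketched here in plain Python with the same sample data:

```python
move_providers_details_content = {
    "Entities": [
        {"Spec": {"Type": "VMWARE_ESXI_VCENTER", "UUID": "uuid1234567890"}},
        {"Spec": {"Type": "AOS_PC", "UUID": "uuid1234567891"}},
    ]
}

# selectattr('Spec.Type', '==', 'VMWARE_ESXI_VCENTER') keeps matching
# entities; `first` takes the first match, then .Spec.UUID reads the value.
matches = [entity for entity in move_providers_details_content["Entities"]
           if entity["Spec"]["Type"] == "VMWARE_ESXI_VCENTER"]
uuid = matches[0]["Spec"]["UUID"]

print(uuid)  # uuid1234567890
```

Note that, like `first`, indexing `matches[0]` fails if nothing matched, so in a real playbook you may want a default for the no-match case.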
I'm trying to deploy an ELK stack on AKS that will take messages from RabbitMQ, with the data ultimately ending up in Kibana. To do this I'm using the Elastic operator, via
kubectl apply -f https://download.elastic.co/downloads/eck/1.3.0/all-in-one.yaml
Everything is working except the connection between Logstash and Elasticsearch. I can log in to Kibana, I can get the default Elasticsearch message in the browser, and all the logs look fine, so I think the issue lies in the Logstash configuration. My configuration is at the end of the question; you can see I'm using secrets to get the various passwords and to access the public certificates that make HTTPS work.
Most confusingly, I can bash into the running logstash pod and with the exact same certificate run
curl --cacert /etc/logstash/certificates/tls.crt -u elastic:<redacted-password> https://rabt-db-es-http:9200
This gives me the response:
{
  "name" : "rabt-db-es-default-0",
  "cluster_name" : "rabt-db",
  "cluster_uuid" : "9YoWLsnMTwq5Yor1ak2JGw",
  "version" : {
    "number" : "7.10.0",
    "build_flavor" : "default",
    "build_type" : "docker",
    "build_hash" : "51e9d6f22758d0374a0f3f5c6e8f3a7997850f96",
    "build_date" : "2020-11-09T21:30:33.964949Z",
    "build_snapshot" : false,
    "lucene_version" : "8.7.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
To me, this verifies that the pod can communicate with the database and has the correct user, password, and certificate in place to do so securely. Why then does this fail when going through the Logstash conf file?
The error from the logstash end is
[WARN ] 2021-01-14 15:24:38.360 [Ruby-0-Thread-6: /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.6.2-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:241] elasticsearch - Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"https://rabt-db-es-http:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :error=>"Got response code '401' contacting Elasticsearch at URL 'https://rabt-db-es-http:9200/'"}
From Elasticsearch I can see the failed requests as
{"type": "server", "timestamp": "2021-01-14T15:36:13,725Z", "level": "WARN", "component": "o.e.x.s.t.n.SecurityNetty4HttpServerTransport", "cluster.name": "rabt-db", "node.name": "rabt-db-es-default-0", "message": "http client did not trust this server's certificate, closing connection Netty4HttpChannel{localAddress=/10.244.0.30:9200, remoteAddress=/10.244.0.15:37766}", "cluster.uuid": "9YoWLsnMTwq5Yor1ak2JGw", "node.id": "9w3fXZBZQGeBMeFYGqYUXw" }
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: logstash-config
  labels:
    app.kubernetes.io/name: rabt
    app.kubernetes.io/component: logstash
data:
  logstash.yml: |
    http.host: "0.0.0.0"
    path.config: /usr/share/logstash/pipeline
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: logstash-pipeline
  labels:
    app.kubernetes.io/name: rabt
    app.kubernetes.io/component: logstash
data:
  logstash.conf: |
    input {
      rabbitmq {
        host => "rabt-mq"
        port => 5672
        durable => true
        queue => "rabt-rainfall-queue"
        exchange => "rabt-rainfall-exchange"
        exchange_type => "direct"
        heartbeat => 30
        durable => true
        user => "${RMQ_USERNAME}"
        password => "${RMQ_PASSWORD}"
      }
      file {
        path => "/usr/share/logstash/config/intensity.csv"
        start_position => "beginning"
        codec => plain {
          charset => "ISO-8859-1"
        }
        type => "intensity"
      }
    }
    filter {
      csv {
        separator => ","
        columns => ["Duration", "Intensity"]
      }
    }
    output {
      if [type] == "rainfall" {
        elasticsearch {
          hosts => [ "${ES_HOSTS}" ]
          ssl => true
          cacert => "/etc/logstash/certificates/tls.crt"
          index => "rabt-rainfall-%{+YYYY.MM.dd}"
        }
      }
      else if [type] == "intensity" {
        elasticsearch {
          hosts => [ "${ES_HOSTS}" ]
          ssl => true
          cacert => "/etc/logstash/certificates/tls.crt"
          index => "intensity-%{+YYYY.MM.dd}"
        }
      }
    }
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: rainfall-intensity-threshold
  labels:
    app.kubernetes.io/name: rabt
    app.kubernetes.io/component: logstash
data:
  intensity.csv: |
    Duration,Intensity
    0.1,7.18941593
    0.2,6.34611898
    0.3,5.89945352
    0.4,5.60173824
    0.5,5.38119846
    0.6,5.20746530
    0.7,5.06495933
    0.8,4.94467113
    0.9,4.84094288
    1,4.75000000
    2,4.19283923
    3,3.89773029
    4,3.70103175
    5,3.55532256
    6,3.44053820
    7,3.34638544
    8,3.26691182
    9,3.19837924
    10,3.13829388
    20,2.77018141
    30,2.57520486
    40,2.44524743
    50,2.34897832
    60,2.27314105
    70,2.21093494
    80,2.15842723
    90,2.11314821
    100,2.07345020
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rabt-logstash
  labels:
    app.kubernetes.io/name: rabt
    app.kubernetes.io/component: logstash
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: rabt
      app.kubernetes.io/component: logstash
  template:
    metadata:
      labels:
        app.kubernetes.io/name: rabt
        app.kubernetes.io/component: logstash
    spec:
      containers:
        - name: logstash
          image: docker.elastic.co/logstash/logstash:7.9.2
          ports:
            - name: "tcp-beats"
              containerPort: 5044
          env:
            - name: ES_HOSTS
              value: "https://rabt-db-es-http:9200"
            - name: ES_USER
              value: "elastic"
            - name: ES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: rabt-db-es-elastic-user
                  key: elastic
            - name: RMQ_USERNAME
              valueFrom:
                secretKeyRef:
                  name: rabt-mq-default-user
                  key: username
            - name: RMQ_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: rabt-mq-default-user
                  key: password
          volumeMounts:
            - name: config-volume
              mountPath: /usr/share/logstash/config
            - name: pipeline-volume
              mountPath: /usr/share/logstash/pipeline
            - name: ca-certs
              mountPath: /etc/logstash/certificates
              readOnly: true
      volumes:
        - name: config-volume
          projected:
            sources:
              - configMap:
                  name: logstash-config
              - configMap:
                  name: rainfall-intensity-threshold
        - name: pipeline-volume
          configMap:
            name: logstash-pipeline
        - name: ca-certs
          secret:
            secretName: rabt-db-es-http-certs-public
---
apiVersion: v1
kind: Service
metadata:
  name: rabt-logstash
  labels:
    app.kubernetes.io/name: rabt
    app.kubernetes.io/component: logstash
spec:
  ports:
    - name: "tcp-beats"
      port: 5044
      targetPort: 5044
  selector:
    app.kubernetes.io/name: rabt
    app.kubernetes.io/component: logstash
You're missing the user/password in the Logstash output configuration. Your curl test succeeded because it passed credentials with -u elastic:<redacted-password>; the elasticsearch output in your pipeline sends none, hence the 401:
elasticsearch {
  hosts => [ "${ES_HOSTS}" ]
  ssl => true
  cacert => "/etc/logstash/certificates/tls.crt"
  index => "rabt-rainfall-%{+YYYY.MM.dd}"
  user => "${ES_USER}"
  password => "${ES_PASSWORD}"
}
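Logstash resolves `${ES_USER}`-style references from the container's environment when it loads the pipeline, which is why the secret-backed env vars in the Deployment are enough. As a rough illustration of that substitution mechanism only (plain Python with hypothetical values, not Logstash itself):

```python
import os
from string import Template

# Hypothetical values; in the Deployment these come from Kubernetes secrets.
os.environ["ES_USER"] = "elastic"
os.environ["ES_PASSWORD"] = "s3cret"

# A fragment in Logstash's ${VAR} reference style.
snippet = 'user => "${ES_USER}" password => "${ES_PASSWORD}"'

# string.Template uses the same ${name} syntax, so it can stand in
# for the environment-variable expansion Logstash performs.
resolved = Template(snippet).substitute(os.environ)
print(resolved)
```

If a referenced variable is missing from the environment, Logstash fails to start the pipeline, so it is worth double-checking the secretKeyRef names above.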
I have an ECK setup (https://www.elastic.co/guide/en/cloud-on-k8s/1.0-beta/k8s-overview.html). I am trying to add Fluentd so Kubernetes logs can be sent to Elasticsearch and viewed in Kibana.
However, when I look at the Fluentd pod I see the following errors. It looks like it's having trouble connecting to Elasticsearch, or finding it:
2020-07-02 15:47:54 +0000 [warn]: #0 [out_es] Could not communicate to
Elasticsearch, resetting connection and trying again. end of file
reached (EOFError) 2020-07-02 15:47:54 +0000 [warn]: #0 [out_es]
Remaining retry: 14. Retry to communicate after 2 second(s).
2020-07-02 15:47:58 +0000 [warn]: #0 [out_es] Could not communicate to
Elasticsearch, resetting connection and trying again. end of file
reached (EOFError) 2020-07-02 15:47:58 +0000 [warn]: #0 [out_es]
Remaining retry: 13. Retry to communicate after 4 second(s).
elastic.yml
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: es-gp2
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
reclaimPolicy: Delete
---
apiVersion: elasticsearch.k8s.elastic.co/v1beta1
kind: Elasticsearch
metadata:
  name: data-es
spec:
  version: 7.4.2
  spec:
    http:
      tls:
        certificate:
          secretName: es-cert
  nodeSets:
    - name: default
      count: 2
      volumeClaimTemplates:
        - metadata:
            name: es-data
            annotations:
              volume.beta.kubernetes.io/storage-class: es-gp2
          spec:
            accessModes:
              - ReadWriteOnce
            storageClassName: es-gp2
            resources:
              requests:
                storage: 10Gi
      config:
        node.master: true
        node.data: true
        node.ingest: true
        node.store.allow_mmap: false
        xpack.security.authc.realms:
          native:
            native1:
              order: 1
---
apiVersion: kibana.k8s.elastic.co/v1beta1
kind: Kibana
metadata:
  name: data-kibana
spec:
  version: 7.4.2
  count: 1
  elasticsearchRef:
    name: data-es
fluentd.yml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: fluentd
  namespace: elastic
  labels:
    app: fluentd
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: fluentd
  labels:
    app: fluentd
rules:
  - apiGroups:
      - ""
    resources:
      - pods
      - namespaces
    verbs:
      - get
      - list
      - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: fluentd
roleRef:
  kind: ClusterRole
  name: fluentd
  apiGroup: rbac.authorization.k8s.io
subjects:
  - kind: ServiceAccount
    name: fluentd
    namespace: elastic
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: elastic
  labels:
    app: fluentd
spec:
  selector:
    matchLabels:
      app: fluentd
  template:
    metadata:
      labels:
        app: fluentd
    spec:
      serviceAccount: fluentd
      serviceAccountName: fluentd
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      containers:
        - name: fluentd
          image: fluent/fluentd-kubernetes-daemonset:v1-debian-elasticsearch
          env:
            - name: FLUENT_ELASTICSEARCH_HOST
              value: "data-es-es-default.default"
            - name: FLUENT_ELASTICSEARCH_PORT
              value: "9200"
            - name: FLUENT_ELASTICSEARCH_SCHEME
              value: "http"
            - name: FLUENTD_SYSTEMD_CONF
              value: disable
          resources:
            limits:
              memory: 512Mi
            requests:
              cpu: 100m
              memory: 200Mi
          volumeMounts:
            - name: varlog
              mountPath: /var/log
            - name: varlibdockercontainers
              mountPath: /var/lib/docker/containers
              readOnly: true
      terminationGracePeriodSeconds: 30
      volumes:
        - name: varlog
          hostPath:
            path: /var/log
        - name: varlibdockercontainers
          hostPath:
            path: /var/lib/docker/containers
I'm unsure about the FLUENT_ELASTICSEARCH_HOST variable. The value I have set is data-es-es-default.default, because I have a service called data-es-es-default and it's within the default namespace.
I've set up Fluentd (and only Fluentd) using the following guide: https://www.digitalocean.com/community/tutorials/how-to-set-up-an-elasticsearch-fluentd-and-kibana-efk-logging-stack-on-kubernetes#step-4-%E2%80%94-creating-the-fluentd-daemonset. An existing Elasticsearch was already present in the Kubernetes cluster, set up using the ECK link above.
data-es-es-default:
does not look like it's exposed as an HTTP service on 9200.
data-es-es-http:
it looks like I have a second service, exposed as an HTTP service on 9200; I'm not sure what the difference between these two is.
Curling the ES service from within the pod:
curl -u elastic:mypassword https://data-es-es-http.default:9200 -k
{
  "name" : "data-es-es-default-1",
  "cluster_name" : "data-es",
  "cluster_uuid" : "vPWB0jbBT76Aq6Tbo7ta7w",
  "version" : {
    "number" : "7.4.2",
    "build_flavor" : "default",
    "build_type" : "docker",
    "build_hash" : "2f90bbf7b93631e52bafb59b3b049cb44ec25e96",
    "build_date" : "2019-10-28T20:40:44.881551Z",
    "build_snapshot" : false,
    "lucene_version" : "8.2.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
I want to deploy my microservice infrastructure on AKS at Azure. I created a node on which three microservices run. My API gateway should be reachable via a public IP and should forward data to my other two microservices.
PS /home/jan-marius> kubectl get pods
NAME READY STATUS RESTARTS AGE
apigateway-77875f89cb-qcmnf 1/1 Running 0 18h
contacts-5ccc69f74-x287p 1/1 Running 0 18h
templates-579fc4984b-srv7h 1/1 Running 0 18h
So far, so good. After that I created a public IP following the Microsoft docs and changed my YAML file as follows.
az network public-ip create \
  --resource-group myResourceGroup \
  --name myAKSPublicIP \
  --sku Standard \
  --allocation-method static
apiVersion: apps/v1
kind: Deployment
metadata:
  name: apigateway
spec:
  replicas: 1
  selector:
    matchLabels:
      app: apigateway
  template:
    metadata:
      labels:
        app: apigateway
    spec:
      nodeSelector:
        "beta.kubernetes.io/os": linux
      containers:
        - name: apigateway
          image: xxx.azurecr.io/apigateway:11
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 250m
              memory: 512Mi
          ports:
            - containerPort: 8800
              name: apigateway
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/azure-dns-label-name: tegos-sendmessage
  name: apigateway
spec:
  loadBalancerIP: 20.50.10.36
  type: LoadBalancer
  ports:
    - port: 8800
  selector:
    app: apigateway
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: contacts
spec:
  replicas: 1
  selector:
    matchLabels:
      app: contacts
  template:
    metadata:
      labels:
        app: contacts
    spec:
      nodeSelector:
        "beta.kubernetes.io/os": linux
      containers:
        - name: contacts
          image: xxx.azurecr.io/contacts:12
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 250m
              memory: 512Mi
          ports:
            - containerPort: 8100
              name: contacts
---
apiVersion: v1
kind: Service
metadata:
  name: contacts
spec:
  ports:
    - port: 8100
  selector:
    app: contacts
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: templates
spec:
  replicas: 1
  selector:
    matchLabels:
      app: templates
  template:
    metadata:
      labels:
        app: templates
    spec:
      nodeSelector:
        "beta.kubernetes.io/os": linux
      containers:
        - name: templates
          image: xxx.azurecr.io/templates:13
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 250m
              memory: 512Mi
          ports:
            - containerPort: 8200
              name: templates
---
apiVersion: v1
kind: Service
metadata:
  name: templates
spec:
  ports:
    - port: 8200
  selector:
    app: templates
However, when I query the service, the external IP is stuck in a pending state:
PS /home/jan-marius> kubectl get service apigateway
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
apigateway LoadBalancer 10.0.181.113 <pending> 8800:30817/TCP 19h
PS /home/jan-marius> kubectl describe service apigateway
Name: apigateway
Namespace: default
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"service.beta.kubernetes.io/azure-dns-label-name":"tegos-sendmessage"},"nam...
service.beta.kubernetes.io/azure-dns-label-name: tegos-sendmessage
Selector: app=apigateway
Type: LoadBalancer
IP: 10.0.181.113
IP: 20.50.10.36
Port: <unset> 8800/TCP
TargetPort: 8800/TCP
NodePort: <unset> 30817/TCP
Endpoints: 10.244.0.14:8800
Session Affinity: None
External Traffic Policy: Cluster
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal EnsuringLoadBalancer 5m (x216 over 17h) service-controller Ensuring load balancer
I read online that this error can occur if the locations of the cluster and the external IP do not match, or if the LoadBalancer types do not match. I am sure the locations match, but I can't be sure about the LoadBalancer types: the external IP SKU is set to Standard, yet I have never explicitly defined the LoadBalancer type and I don't know where to find it. Can someone tell me what I'm doing wrong and how I can expose my web service?
PS /home/jan-marius> az aks show -g SendMessageResource -n SendMessageCluster
{
"aadProfile": null,
"addonProfiles": {
"httpapplicationrouting": {
"config": {
"HTTPApplicationRoutingZoneName": "e6e284534ad74c0d9c01.westeurope.aksapp.io"
},
"enabled": true,
"identity": null
},
"omsagent": {
"config": {
"loganalyticsworkspaceresourceid": "/subscriptions/a553134ba7eb-cb83-484d-a05d-44bb70125b8a/resourcegroups/defaultresourcegroup-weu/providers/microsoft.operationalinsights/workspaces/defaultworkspace-a55ba7eb-cb83-484d-a05d-44bb334170125b8a-weu"
},
"enabled": true,
"identity": null
}
},
"agentPoolProfiles": [
{
"availabilityZones": null,
"count": 1,
"enableAutoScaling": null,
"enableNodePublicIp": false,
"maxCount": null,
"maxPods": 110,
"minCount": null,
"mode": "System",
"name": "nodepool1",
"nodeLabels": {},
"nodeTaints": null,
"orchestratorVersion": "1.15.11",
"osDiskSizeGb": 100,
"osType": "Linux",
"provisioningState": "Succeeded",
"scaleSetEvictionPolicy": null,
"scaleSetPriority": null,
"spotMaxPrice": null,
"tags": null,
"type": "VirtualMachineScaleSets",
"vmSize": "Standard_DS2_v2"
}
],
"apiServerAccessProfile": null,
"autoScalerProfile": null,
"diskEncryptionSetId": null,
"dnsPrefix": "SendMessag-SendMessageResou-a55ba7",
"enablePodSecurityPolicy": null,
"enableRbac": true,
"fqdn": "sendmessag-sendmessageresou-a55ba7-14596671.hcp.westeurope.azmk8s.io",
"id": "/subscriptions/a55b3141a7eb-cb83-484d-a05d-44bb70125b8a/resourcegroups/SendMessageResource/providers/Microsoft.ContainerService/managedClusters/SendMessageCluster",
"identity": null,
"identityProfile": null,
"kubernetesVersion": "1.15.11",
"linuxProfile": {
"adminUsername": "azureuser",
"ssh": {
"publicKeys": [
{
"keyData": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC7bzXktZht3zLbHrz3Xpv3VNhtrj/XmBKOIHB0D0ZpBIrsfXcg9veBov8n3cU/F/oKIfqcL2xaoktVwZFz9AjEi7qPXdxrsVLjV2+w0kPyC3ZC5JbtLSO4CFgn0MtclC6mE3OPYczYPoFdZI3/w/AmoZ6TsT7MupkCjKtrYIIaDZ/22zuTMYMvJro7cfjKI5OSR7soybXcoFKw+3tzwO9Mv9lUQr7x0eRCUAUJN6OziEI9p36fLEnNgRG4GiJJZP5aqqsVRUDuu8PF9pO0YLMBr3b2HHgzpDwSebZ6TU//okuc30cqG/2v2LkjBDRGrs5YxiSv3+ejr/9A4XGWup4Z"
}
]
}
},
"location": "westeurope",
"maxAgentPools": 10,
"name": "SendMessageCluster",
"networkProfile": {
"dnsServiceIp": "10.0.0.10",
"dockerBridgeCidr": "172.17.0.1/16",
"loadBalancerProfile": {
"allocatedOutboundPorts": null,
"effectiveOutboundIps": [
{
"id": "/subscriptions/a55b3142a7eb-cb83-484d-a05d-44bb70125b8a/resourceGroups/MC_SendMessageResource_SendMessageCluster_westeurope/providers/Microsoft.Network/publicIPAddresses/988314172c28-d4da-431e-b7f8-5acb08e468b4",
"resourceGroup": "MC_SendMessageResource_SendMessageCluster_westeurope"
}
],
"idleTimeoutInMinutes": null,
"managedOutboundIps": {
"count": 1
},
"outboundIpPrefixes": null,
"outboundIps": null
},
"loadBalancerSku": "Standard",
"networkMode": null,
"networkPlugin": "kubenet",
"networkPolicy": null,
"outboundType": "loadBalancer",
"podCidr": "10.244.0.0/16",
"serviceCidr": "10.0.0.0/16"
},
"nodeResourceGroup": "MC_SendMessageResource_SendMessageCluster_westeurope",
"privateFqdn": null,
"provisioningState": "Succeeded",
"resourceGroup": "SendMessageResource",
"servicePrincipalProfile": {
"clientId": "9009bcd8-4933-4641-b00b-237e157d86589b"
},
"sku": {
"name": "Basic",
"tier": "Free"
},
"type": "Microsoft.ContainerService/ManagedClusters",
"windowsProfile": null
}
If your public IP is in another resource group, you need to specify that resource group for the IP via an annotation:
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/azure-dns-label-name: tegos-sendmessage
    service.beta.kubernetes.io/azure-load-balancer-resource-group: myResourceGroup
  name: apigateway
spec:
  loadBalancerIP: 20.50.10.36
  type: LoadBalancer
  ports:
    - port: 8800
  selector:
    app: apigateway
The issue is that I cannot create a Deployment spec without a replication controller being created along with it. I would rather not use a replication controller, because my app only ever uses one pod, and I would like to set the restart policy to Never so that a terminated container does not try to restart.
apiVersion: v1
kind: Pod
metadata:
  name: two-containers
spec:
  restartPolicy: Never
  volumes:
    - name: shared-data
      emptyDir: {}
  containers:
    - name: nginx-container
      image: nginx
      volumeMounts:
        - name: shared-data
          mountPath: /usr/share/nginx/html
    - name: debian-container
      image: debian
      volumeMounts:
        - name: shared-data
          mountPath: /pod-data
      command: ["/bin/sh"]
      args: ["-c", "echo Hello from the debian container > /pod-data/index.html"]
Above is the target YAML file, which I would like to implement and deploy with the Kubernetes client-go; however, client-go currently only seems to offer deployment via a replication controller.
// Define Deployments spec.
deploySpec := &v1beta1.Deployment{
    TypeMeta: unversioned.TypeMeta{
        Kind:       "Deployment",
        APIVersion: "extensions/v1beta1",
    },
    ObjectMeta: v1.ObjectMeta{
        Name: "binary-search",
    },
    Spec: v1beta1.DeploymentSpec{
        Replicas: int32p(1),
        Template: v1.PodTemplateSpec{
            ObjectMeta: v1.ObjectMeta{
                Name:   appName,
                Labels: map[string]string{"app": appName},
            },
            Spec: v1.PodSpec{
                Containers: []v1.Container{
                    v1.Container{
                        Name:  "nginx-container",
                        Image: "nginx",
                        VolumeMounts: []v1.VolumeMount{
                            v1.VolumeMount{
                                MountPath: "/usr/share/nginx/html",
                                Name:      "shared-data",
                            },
                        },
                    },
                    v1.Container{
                        Name:  "debian-container",
                        Image: "debian",
                        VolumeMounts: []v1.VolumeMount{
                            v1.VolumeMount{
                                MountPath: "/pod-data",
                                Name:      "shared-data",
                            },
                        },
                        Command: []string{
                            "/bin/sh",
                        },
                        Args: []string{
                            "-c",
                            "echo Hello from the debian container > /pod-data/index1.html",
                        },
                    },
                },
                RestartPolicy: v1.RestartPolicyAlways,
                DNSPolicy:     v1.DNSClusterFirst,
                Volumes: []v1.Volume{
                    v1.Volume{
                        Name: "shared-data",
                        VolumeSource: v1.VolumeSource{
                            EmptyDir: &v1.EmptyDirVolumeSource{},
                        },
                    },
                },
            },
        },
    },
}
// Implement deployment update-or-create semantics.
deploy := c.Extensions().Deployments(namespace)
_, err := deploy.Update(deploySpec)
Any suggestion? Many thanks in advance!
If you don't want the Pod to be restarted, then you can just create the Pod directly. There is no need for a Deployment, since Deployments only make sense if you want automatic Pod restarts and rolling updates.
The code would look somewhat like this (untested sketch):
pod := &v1.Pod{
    ObjectMeta: v1.ObjectMeta{
        Name: "two-containers",
    },
    Spec: v1.PodSpec{
        // Never matches the restartPolicy in the target YAML above.
        RestartPolicy: v1.RestartPolicyNever,
        DNSPolicy:     v1.DNSClusterFirst,
        Volumes: []v1.Volume{
            v1.Volume{
                Name: "shared-data",
                VolumeSource: v1.VolumeSource{
                    EmptyDir: &v1.EmptyDirVolumeSource{},
                },
            },
        },
        Containers: []v1.Container{
            v1.Container{
                Name:  "nginx-container",
                Image: "nginx",
                VolumeMounts: []v1.VolumeMount{
                    v1.VolumeMount{
                        MountPath: "/usr/share/nginx/html",
                        Name:      "shared-data",
                    },
                },
            },
            v1.Container{
                Name:  "debian-container",
                Image: "debian",
                VolumeMounts: []v1.VolumeMount{
                    v1.VolumeMount{
                        MountPath: "/pod-data",
                        Name:      "shared-data",
                    },
                },
                Command: []string{"/bin/sh"},
                Args: []string{
                    "-c",
                    "echo Hello from the debian container > /pod-data/index.html",
                },
            },
        },
    },
}

// Create the Pod directly; a bare Pod has no update-or-create
// semantics like a Deployment, so a plain Create is used here.
pods := c.Core().Pods(namespace)
_, err := pods.Create(pod)