Asciidoc autonumbering callouts with modified start number

I'm currently using Asciidoc to document a YAML code snippet with auto-numbered callouts. The problem is that I'd like the callouts to start at "2" instead of the default "1", so that I can reuse the first callout throughout the snippet (reusing a callout doesn't work with auto-numbering). I can start the callout descriptions at 2 by adding [start=2], but I can't get the same to work inside the code snippet.
Here's an example:
[source,yaml,start=2]
----
apiVersion: v1
baseDomain: example.com <1>
controlPlane: <.>
  hyperthreading: Enabled <.> <.>
  name: master
  platform:
    azure:
      osDisk: <1>
        diskSizeGB: 1024 <.>
      type: Standard_D8s_v3
  replicas: 3
azureVar: 78 <.>
test: z <.>
----
[start=2]
<1> baseDomain and osDisk
<.> controlPlane
<.> h1
<.> h2
<.> diskSizeGB
<.> Gov
<.> test
I've attached a screenshot of the rendered HTML below: you can see the controlPlane code callout starting at "1" instead of the preferred "2", whereas the callout description list worked as expected. Any idea if this is possible?
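One possible workaround to sketch out (untested against your exact snippet): since <1> is already reused explicitly, the remaining callouts can also be given explicit numbers starting at 2 instead of relying on <.> auto-numbering; explicit numbers render as written, so the description list wouldn't need [start=2] at all:
[source,yaml]
----
baseDomain: example.com <1>
controlPlane: <2>
  hyperthreading: Enabled <3> <4>
osDisk: <1>
----
<1> baseDomain and osDisk
<2> controlPlane
<3> h1
<4> h2
Whether repeated explicit callouts render exactly as you want is worth verifying on a small sample first.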

Related

Symbols << & * in .gitlab-ci.yml [duplicate]

I recently came across this and was wondering what &django means:
version: '2'
services:
  django: &django
I can't see anything in the docs related to this.
These are a YAML feature called anchors, and are not particular to Docker Compose. I would suggest having a look at the URL below for more details:
https://learnxinyminutes.com/docs/yaml/
See the section EXTRA YAML FEATURES:
YAML also has a handy feature called 'anchors', which let you easily duplicate
content across your document. Both of these keys will have the same value:
anchored_content: &anchor_name This string will appear as the value of two keys.
other_anchor: *anchor_name
Anchors can be used to duplicate/inherit properties
base: &base
  name: Everyone has same name
foo: &foo
  <<: *base
  age: 10
bar: &bar
  <<: *base
  age: 20
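For clarity (this expansion is not in the quoted page), the merge key above makes foo and bar equivalent to writing them out in full:
foo:
  name: Everyone has same name
  age: 10
bar:
  name: Everyone has same name
  age: 20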
To complement Tarun's answer, & identifies an anchor and * is an alias referring back to that anchor. The YAML specification describes it as follows:
In the representation graph, a node may appear in more than one
collection. When serializing such data, the first occurrence of the
node is identified by an anchor. Each subsequent occurrence is
serialized as an alias node which refers back to this anchor.
Sidenote:
For those who want to start using anchors in their docker-compose files, there is a more powerful way to make reusable anchors using docker-compose YAML extension fields.
version: "3.4"
# x-docker-data is an extension and when docker-compose
# parses the YAML, it will not do anything with it
x-docker-data: &docker-file-info
build:
context: .
dockerfile: Dockerfile
services:
some_service_a:
<<: *docker-file-info
restart: on-failure
ports:
- 8080:9090
some_service_b:
<<: *docker-file-info
restart: on-failure
ports:
- 8080:9595
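For illustration (not part of the original answer), once the alias and merge key are resolved, some_service_a is equivalent to:
services:
  some_service_a:
    build:
      context: .
      dockerfile: Dockerfile
    restart: on-failure
    ports:
      - 8080:9090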

Loki Promtail dynamic label values extraction from logs - scrape config

The log entry looks like this:
2022-05-12 10:32:19,659 - root:loadrequestheaders:33 - INFO: otds=XYZ, domain=XYZ, username=user, origin=10.98.76.67
I tried to add a Promtail configuration to scrape the log entry and extract new labels. I updated the loki-promtail ConfigMap with the content below:
- job_name: kubernetes-pods-static
  pipeline_stages:
    - match:
        selector: '{container="model"}'
        stages:
          - regex:
              expression: '\.*username=(?P<username>\S+).*origin=(?P<origin>\S+)'
          - labels:
              username:
              origin:
I can see the Loki pods are running successfully with the ConfigMap update above, but I don't see "username" and "origin" in the labels list.
Any suggestions on how to add these new labels to the Loki data source?
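A hedged suggestion, not a confirmed fix: before relying on index labels, the same extraction can be checked at query time with a LogQL regexp parser (assuming the {container="model"} selector actually matches the stream):
{container="model"} | regexp `username=(?P<username>\S+).*origin=(?P<origin>\S+)`
If username and origin appear as labels in that query result but never as stream labels, the Promtail pipeline (or its match selector / job_name) is the part to revisit; also note that high-cardinality values such as usernames are generally better kept as query-time labels than as index labels.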

How To Solve Kibana Dashboard "404" display error?

I am trying to set up authentication with Okta for the Elastic stack on Google Cloud. The first step in the Okta guide is to route the cluster address through a certain endpoint and path, as shown here.
I have an Ingress defined as follows:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kibana-ingress
  annotations:
    kubernetes.io/ingress.class: "gce"
spec:
  tls:
    - hosts:
        - x.x.x.net
  rules:
    - host: x.x.x.net
      http:
        paths:
          - path: /api/security/v1/saml
            pathType: Prefix
            backend:
              service:
                name: kibana-kibana
                port:
                  number: 5601
But every time I check the host from the browser with that path, it returns an error page:
{"statusCode":404,"error":"Not Found","message":"Not Found"}
What could possibly cause this? I cannot access the host from my browser with this path, yet if I remove the path, the dashboard is accessible.
Thanks
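A hedged sketch rather than a confirmed answer: since the dashboard works when the path is removed, one option is to route the whole host to Kibana so both the UI and the SAML endpoint are served, instead of exposing only the SAML path (host and service names copied from the question above):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kibana-ingress
  annotations:
    kubernetes.io/ingress.class: "gce"
spec:
  tls:
    - hosts:
        - x.x.x.net
  rules:
    - host: x.x.x.net
      http:
        paths:
          - path: /            # route everything, not just the SAML path
            pathType: Prefix
            backend:
              service:
                name: kibana-kibana
                port:
                  number: 5601
The exact SAML callback path has also changed across Kibana versions, so it is worth double-checking /api/security/v1/saml against the installed release.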

Error deploying SonarQube on an OpenShift Cluster

I have added a SonarQube operator (https://github.com/RedHatGov/sonarqube-operator) to my cluster, and when I spin up a SonarQube instance from the operator, the container terminates with the failure message:
ERROR: [1] bootstrap checks failed
[1]: max virtual memory areas vm.max_map_count [253832] is too low, increase to at least [262144]
The problem lies in the fact that the operator refers to the label
tuned.openshift.io/elasticsearch
which leaves the necessary tuning to me, but there is no Elasticsearch operator or tuning on this pristine cluster.
I have created a Tuned resource for SonarQube, but for whatever reason it does not get applied. It currently looks like this:
apiVersion: tuned.openshift.io/v1
kind: Tuned
metadata:
  name: sonarqube
  namespace: openshift-cluster-node-tuning-operator
spec:
  profile:
    - data: |
        [main]
        summary=A custom OpenShift SonarQube profile
        include=openshift-control-plane
        [sysctl]
        vm.max_map_count=262144
      name: openshift-sonarqube
  recommend:
    - match:
        - label: tuned.openshift.io/sonarqube
          match:
            - label: node-role.kubernetes.io/master
            - label: node-role.kubernetes.io/infra
          type: pod
      priority: 10
      profile: openshift-sonarqube
and on the deployment I set the label
tuned.openshift.io/sonarqube
But for whatever reason it is not applied and I still get the above error message. Does anyone have an idea, and/or are these even the necessary steps? I followed the documentation (https://access.redhat.com/documentation/en-us/openshift_container_platform/4.8/html/scalability_and_performance/using-node-tuning-operator) and it didn't work with the customized example. I also tried nesting match inside match, but that didn't work either.
Any suggestions?
Maybe try this:
oc create -f - <<EOF
apiVersion: tuned.openshift.io/v1
kind: Tuned
metadata:
  name: openshift-elasticsearch
  namespace: openshift-cluster-node-tuning-operator
spec:
  profile:
    - data: |
        [main]
        summary=Optimize systems running ES on OpenShift nodes
        include=openshift-node
        [sysctl]
        vm.max_map_count=262144
      name: openshift-elasticsearch
  recommend:
    - match:
        - label: tuned.openshift.io/elasticsearch
          type: pod
      priority: 20
      profile: openshift-elasticsearch
EOF
(Got it from: https://github.com/openshift/cluster-node-tuning-operator/blob/master/examples/elasticsearch.yaml)
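One more thing worth checking (an addition, not part of the quoted answer): the recommend block above only fires for pods that actually carry the tuned.openshift.io/elasticsearch label, and labels set on the Deployment object itself do not propagate to its pods. The relevant fragment of the pod template would look roughly like this (the Deployment name is hypothetical):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sonarqube                            # hypothetical name, adjust to the operator's Deployment
spec:
  template:
    metadata:
      labels:
        tuned.openshift.io/elasticsearch: "" # label the Tuned 'recommend' match looks for on pods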

How do I expose my service to the world?

Instead of working with Google Cloud, I decided to set up Kubernetes on my own machine. I made a Docker image of my hello-world web server and set up hello-controller.yaml:
apiVersion: v1
kind: ReplicationController
metadata:
  name: hello
  labels:
    name: hello
spec:
  replicas: 1
  selector:
    name: hello
  template:
    metadata:
      labels:
        name: hello
    spec:
      containers:
        - name: hello
          image: flaggy/hello
          ports:
            - containerPort: 8888
Now I want to expose the service to the world. I don't think the Vagrant provider has a load balancer (which seems to be the best way to do it), so I tried the NodePort service type. However, nothing seems to be listening on the newly created NodePort on any IP I try. Here's hello-service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: hello
  labels:
    name: hello
spec:
  type: NodePort
  selector:
    name: hello
  ports:
    - port: 8888
If I log into my minion, I can access port 8888:
$ curl 10.246.1.3:8888
Hello!
When I describe my service this is what I get:
$ kubectl.sh describe service/hello
W0628 15:20:45.049822 1245 request.go:302] field selector: v1 - events - involvedObject.name - hello: need to check if this is versioned correctly.
W0628 15:20:45.049874 1245 request.go:302] field selector: v1 - events - involvedObject.namespace - default: need to check if this is versioned correctly.
W0628 15:20:45.049882 1245 request.go:302] field selector: v1 - events - involvedObject.kind - Service: need to check if this is versioned correctly.
W0628 15:20:45.049887 1245 request.go:302] field selector: v1 - events - involvedObject.uid - 2c0005e7-1dc2-11e5-8369-0800279dd272: need to check if this is versioned correctly.
Name: hello
Labels: name=hello
Selector: name=hello
Type: NodePort
IP: 10.247.5.87
Port: <unnamed> 8888/TCP
NodePort: <unnamed> 31423/TCP
Endpoints: 10.246.1.3:8888
Session Affinity: None
No events.
I cannot find anything listening on port 31423 which, as I gather, should be the external port for my service. I am also puzzled about IP 10.247.5.87.
I note this:
$ kubectl.sh get nodes
NAME LABELS STATUS
10.245.1.3 kubernetes.io/hostname=10.245.1.3 Ready
Why is that IP different from what I see on describe for the service? I tried accessing both IPs on my host:
$ curl 10.245.1.3:31423
curl: (7) Failed to connect to 10.245.1.3 port 31423: Connection refused
$ curl 10.247.5.87:31423
curl: (7) Failed to connect to 10.247.5.87 port 31423: No route to host
$
So IP 10.245.1.3 is accessible, although port 31423 is not bound to it. I tried routing 10.247.5.87 to vboxnet1, but it didn't change anything:
$ sudo route add -net 10.247.5.87 netmask 255.255.255.255 vboxnet1
$ curl 10.247.5.87:31423
curl: (7) Failed to connect to 10.247.5.87 port 31423: No route to host
If I do sudo netstat -anp | grep 31423 on the minion, nothing comes up. Strangely, nothing comes up for sudo netstat -anp | grep 8888 either.
There must be either some iptables magic or some interface in promiscuous mode being abused.
Would it be this difficult to get things working on bare metal as well? I haven't tried the AWS provider either, but I am getting worried.
A few things.
Your single pod is 10.246.1.3:8888 - that seems to work.
Your service is 10.247.5.87:8888 - that should work as long as you are within your cluster (it's virtual - you will not see it in netstat). This is the first thing to verify.
Your node is 10.245.1.3 and your service should ALSO be on 10.245.1.3:31423 - this is the part that does not seem to be working correctly. Like service IPs, this binding is virtual - it should show up in iptables-save but not netstat. If you log into your node (minion), can you curl localhost:31423?
You might find this doc useful: https://github.com/thockin/kubernetes/blob/docs-debug-svcs/docs/debugging-services.md
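As an illustrative sketch (not part of the original answer), spelling out the full port mapping in the Service sometimes makes the three layers easier to reason about; targetPort and nodePort are written explicitly here only for illustration, since nodePort is normally auto-assigned from the 30000-32767 range:
apiVersion: v1
kind: Service
metadata:
  name: hello
spec:
  type: NodePort
  selector:
    name: hello
  ports:
    - port: 8888        # service port on the cluster IP (10.247.5.87)
      targetPort: 8888  # containerPort on the pod (10.246.1.3)
      nodePort: 31423   # port opened on every node (10.245.1.3); auto-assigned if omitted
With that picture in mind, curl localhost:31423 from the node itself, as suggested above, is the quickest check that kube-proxy has programmed the forwarding rules.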
