I have deployed the Bitnami helm chart of elasticsearch on the Kubernetes environment.
https://github.com/bitnami/charts/tree/master/bitnami/elasticsearch
Unfortunately, I am getting the following error for the coordinating-only pod (the cluster is restricted).
Pods "elasticsearch-elasticsearch-coordinating-only-5b57786cf6-" is forbidden: unable to validate against any pod security policy:
[spec.initContainers[0].securityContext.privileged: Invalid value: true: Privileged containers are not allowed]; Deployment does not have minimum availability.
Is there anything I need to adapt or add in the default values.yaml?
Any suggestion to get rid of this error?
Thanks.
The pod can't be validated because your cluster is restricted by a pod security policy. In your situation someone (presumably an administrator) has blocked the option to run privileged containers for you.
Here's an example of how pod security policy blocks privileged containers:
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: example
spec:
  privileged: false  # Don't allow privileged pods!
  seLinux:
    rule: RunAsAny
What you need is an appropriate Role (or ClusterRole) that grants use of a PodSecurityPolicy allowing privileged containers, plus a RoleBinding that binds it to the service account your pods run as.
This is very well explained in the Kubernetes documentation at Enabling pod security policy.
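For illustration, a minimal sketch of what that could look like, assuming a PSP named privileged already exists that allows privileged containers (all names and the namespace here are only examples, and you would bind to whatever service account the chart's pods actually use):

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: privileged-psp-user        # example name
rules:
  - apiGroups: ["policy"]
    resources: ["podsecuritypolicies"]
    resourceNames: ["privileged"]  # the PSP that allows privileged containers
    verbs: ["use"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: elasticsearch-psp          # example name
  namespace: default               # namespace of the Elasticsearch release
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: privileged-psp-user
subjects:
  - kind: ServiceAccount
    name: default                  # or the chart's service account
    namespace: default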
So the solution was to set the following parameters in the values.yaml file and then simply deploy.
There is no need to create any Role or PodSecurityPolicy.
sysctlImage:
  enabled: false

curator:
  enabled: true
  rbac:
    # Specifies whether RBAC should be enabled
    enabled: true
  psp:
    # Specifies whether a podsecuritypolicy should be created
    create: true
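For reference, the override file is then passed at install time roughly like this (the release name and file path are just examples, assuming the bitnami repo is already added):

$ helm install elasticsearch bitnami/elasticsearch -f values.yaml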
Also run this command on each node:
sysctl -w vm.max_map_count=262144 && sysctl -w fs.file-max=65536
I'm deploying Sonarqube via the official helm charts and using the following ingress configuration:
ingress:
  enabled: true
  # Used to create an Ingress record.
  hosts:
    - name: sonar.<company>.com
      # Different clouds or configurations might need /* as the default path
      path: /
      # For additional control over serviceName and servicePort
      # serviceName: someService
      # servicePort: somePort
      # the pathType can be one of the following values: Exact|Prefix|ImplementationSpecific(default)
      # pathType: ImplementationSpecific
  annotations:
    # kubernetes.io/tls-acme: "true"
    # nginx.ingress.kubernetes.io/proxy-body-size: "64m"
  # Set the ingressClassName on the ingress record
  # ingressClassName: nginx
  # Additional labels for Ingress manifest file
  # labels:
  #   traffic-type: external
  #   traffic-type: internal
  tls:
    # Secrets must be manually created in the namespace. To generate a self-signed certificate (and private key) and then create the secret in the cluster, please refer to the official documentation available at https://kubernetes.github.io/ingress-nginx/user-guide/tls/#tls-secrets
    - secretName: sonar-server-tls
      hosts:
        - sonar.<company>.com
Sonar is working when using http://sonar.<company>.com:443, but without the certificate. https://sonar.<company>.com doesn't work. I cannot find much related to this specific topic. Some questions:
Do I have to use nginx here? If yes, is it recommended to use nginx.enabled: true to make things work smoothly? The secret name is valid, exists and is found during deployment.
Thanks for any advice.
Using HTTP instead of HTTPS is not recommended, as it does not provide the same level of security. It is possible to use Nginx to enable HTTPS: you will likely need nginx to act as a reverse proxy for the sonar.<company>.com domain, configured to use the secret containing the certificate. It is generally recommended to use the chart's nginx.enabled: true option to make the setup work properly; this lets you set up the nginx configuration and use the secret name provided. Once this is done, you should be able to access Sonar securely on the HTTPS address you specified.
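As a rough sketch of how that maps onto the chart snippet from the question, assuming an NGINX ingress controller is running in the cluster, uncommenting the ingress class and keeping the tls block pointed at your existing secret is usually what ties the certificate to the host:

ingress:
  enabled: true
  hosts:
    - name: sonar.<company>.com
      path: /
  # assumes the NGINX ingress controller watches this class
  ingressClassName: nginx
  tls:
    - secretName: sonar-server-tls   # your existing, manually created secret
      hosts:
        - sonar.<company>.com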
For more information follow this doc.
I'm configuring the Elastic Cloud agent on Azure AKS with a system pool and a user pool. On the system pool I configured the CriticalAddonsOnly=true:NoSchedule taint to prevent application pods from running there. I installed the Elastic Cloud agent, but I'm noticing that the DaemonSet is trying to run pods on that system pool without success. I tried to set the CriticalAddonsOnly=true:NoSchedule toleration in the agent's yaml config, but I got the same errors. Is there a way to force deployment on the system pool, or to exclude Elastic Cloud pods from deploying on that pool?
Here is how the yaml is set up:
tolerations:
  - key: node-role.kubernetes.io/control-plane
    effect: NoSchedule
  - key: node-role.kubernetes.io/master
    effect: NoSchedule
  - key: CriticalAddonsOnly
    operator: "Exists"
    effect: NoSchedule
Regards
node-role.kubernetes.io/control-plane and node-role.kubernetes.io/master are not taints on AKS nodes; they are node labels. So please remove them from the toleration spec.
Furthermore, specifying a toleration does not guarantee scheduling onto the tolerated nodes. It just means that a tainted node will not accept any pods that do not tolerate its taints. As your second node pool does not seem to be tainted, the scheduler simply places your pods there.
You could now add taints to your other node pools or, more easily, just specify a node selector:
nodeSelector:
  kubernetes.azure.com/mode: system
tolerations:
  - key: "CriticalAddonsOnly"
    operator: "Exists"
    effect: "NoSchedule"
The same could also be achieved with node affinity; a sketch follows below. You should check the Helm chart or your deployment option to see whether nodeSelector or nodeAffinity is available.
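If only node affinity is exposed, the equivalent of the node selector above would look roughly like this (same kubernetes.azure.com/mode=system label, just expressed as a required affinity rule):

affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.azure.com/mode
              operator: In
              values:
                - system
tolerations:
  - key: "CriticalAddonsOnly"
    operator: "Exists"
    effect: "NoSchedule"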
I deployed an Elasticsearch cluster to EKS; below is the spec:
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: elk
spec:
  version: 7.15.2
  serviceAccountName: docker-sa
  http:
    tls:
      selfSignedCertificate:
        disabled: true
  nodeSets:
    - name: node
      count: 3
      config:
        ...
I can see it has been deployed correctly and all pods are running.
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
elk-es-node-0 1/1 Running 0 19h
elk-es-node-1 1/1 Running 0 19h
elk-es-node-2 1/1 Running 0 11h
But I can't restart the Elasticsearch deployment:
$ kubectl rollout restart Elasticsearch elk-es-node
Error from server (NotFound): elasticsearches.elasticsearch.k8s.elastic.co "elk-es-node" not found
Elasticsearch uses a StatefulSet, so I tried to restart the StatefulSet:
$ kubectl rollout restart statefulset elk-es-node
statefulset.apps/elk-es-node restarted
The above command says it restarted, but the actual pods are not restarting.
What is the right way to restart a custom kind in K8s?
Use kubectl get all to identify whether the resource created is a Deployment or a StatefulSet.
Use -n <namespace> along with the above command if you are working in a specific namespace.
Assuming you are using a StatefulSet, issue the below command to understand the properties with which it is configured:
kubectl get statefulset <statefulset-name> -o yaml > statefulsetContent.yaml
This will create a yaml file named statefulsetContent.yaml in the same directory.
You can use it to explore the different options configured in the StatefulSet.
Check for .spec.updateStrategy in the yaml file. Based on this we can identify its update strategy.
Below is from the official documentation
There are two possible values:
OnDelete
When a StatefulSet's .spec.updateStrategy.type is set to OnDelete, the StatefulSet controller will not automatically update the Pods in a StatefulSet. Users must manually delete Pods to cause the controller to create new Pods that reflect modifications made to a StatefulSet's .spec.template.
RollingUpdate
The RollingUpdate update strategy implements automated, rolling update for the Pods in a StatefulSet. This is the default update strategy.
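In the exported statefulsetContent.yaml, that field is just a small block under spec, for example:

spec:
  updateStrategy:
    type: RollingUpdate   # or OnDelete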
As a workaround, you can try to scale the StatefulSet down and back up:
kubectl scale sts <statefulset-name> --replicas=<count>
With ECK as the operator, you do not need to use rollout restart. Apply your updated Elasticsearch spec and the operator will perform a rolling update for you. If for any reason you need to restart a pod, use kubectl delete pod <es pod> -n <your es namespace> to remove the pod, and the operator will spin up a new one for you.
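As a sketch of that, one common way to force a full rolling restart is to change something harmless in the nodeSet podTemplate, such as a throwaway annotation (the annotation key and value below are purely illustrative, and the other fields from your spec are omitted), and re-apply the manifest; the operator sees the spec change and restarts the pods one by one:

apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: elk
spec:
  version: 7.15.2
  nodeSets:
    - name: node
      count: 3
      podTemplate:
        metadata:
          annotations:
            # hypothetical annotation, only there to trigger a spec change
            restarted-at: "2022-01-01T00:00:00Z"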
I've started a minikube (using Kubernetes 1.18.3) to test out ECK and specifically packetbeat. The minikube profile is called "packetbeat" (important, as that's the hostname for the Virtualbox VM as well) and I followed the ECK quickstart to get it up and running. ElasticSearch (single node) and Kibana are running fine and packetbeat is gathering flows as well, however, I'm unable to make it add the Kubernetes metadata to the fields.
I'm working in the default namespace and created a ClusterRoleBinding to the view ClusterRole for the default ServiceAccount in the namespace. This is working well; if I do not do that, packetbeat reports that it is unable to list the Pods on the API server.
This is the Beat config I'm using to make ECK deploy packetbeat:
apiVersion: beat.k8s.elastic.co/v1beta1
kind: Beat
metadata:
  name: packetbeat
spec:
  type: packetbeat
  version: 7.9.0
  elasticsearchRef:
    name: quickstart
  kibanaRef:
    name: kibana
  config:
    packetbeat.interfaces.device: any
    packetbeat.protocols:
      - type: http
        ports: [80, 8000, 8080, 9200]
      - type: tls
        ports: [443]
    packetbeat.flows:
      timeout: 30s
      period: 10s
    processors:
      - add_kubernetes_metadata: {}
  daemonSet:
    podTemplate:
      spec:
        terminationGracePeriodSeconds: 30
        hostNetwork: true
        automountServiceAccountToken: true # some older Beat versions are depending on this settings presence in k8s context
        dnsPolicy: ClusterFirstWithHostNet
        containers:
          - name: packetbeat
            securityContext:
              runAsUser: 0
              capabilities:
                add:
                  - NET_ADMIN
(This is mostly a slightly modified example from the ECK examples page.) However, this is not working at all. I tried it with "add_kubernetes_metadata: {}" first, but that errors with the message:
2020-08-19T14:23:38.550Z ERROR [kubernetes] kubernetes/util.go:117 kubernetes: Querying for pod failed with error: pods "packetbeat" not found {"libbeat.processor": "add_kubernetes_metadata"}
This message goes away when I add the "host: packetbeat". I'm no longer getting an error now, but I'm not getting the Kubernetes metadata either. I'm mostly interested in the namespace tag, but I'm not getting any. I do not see any additional errors in the log and it just reports monitoring details every 30 seconds at the moment.
What am I doing wrong? Any more information I can provide to help me debug this?
So the docs are just unclear. Although they do not explicitly state it, you do need to add indexers and matchers. My understanding was that there are "default" ones (since you can disable them), but that does not seem to be the case. Adding the indexers and matchers as per the example in the docs makes the Kubernetes metadata part of the data.
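For reference, the processor block ends up looking something along these lines; this follows the shape of the documented example, and ${NODE_NAME} assumes the pod spec injects the node name as an environment variable via fieldRef: spec.nodeName:

processors:
  - add_kubernetes_metadata:
      host: ${NODE_NAME}
      # indexers and matchers have to be listed explicitly; ip_port plus a
      # fields matcher on the flow IPs is one combination that works for packetbeat events
      indexers:
        - ip_port:
      matchers:
        - fields:
            lookup_fields: ["destination.ip", "server.ip"]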
I am using the golang lib client-go to connect to a running local Kubernetes cluster. To start with, I took the code from the example: out-of-cluster-client-configuration.
Running the code like this:
$ KUBERNETES_SERVICE_HOST=localhost KUBERNETES_SERVICE_PORT=6443 go run ./main.go
results in the following error:
panic: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
goroutine 1 [running]:
/var/run/secrets/kubernetes.io/serviceaccount/
I am not quite sure which part of the configuration I am missing. I've researched the following links:
https://kubernetes.io/docs/reference/access-authn-authz/authentication/#client-go-credential-plugins
https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/
But with no luck.
I guess I need to either let client-go know which token/serviceAccount to use, or configure kubectl in a way that everyone can connect to its API.
Here's the status of my kubectl setup, shown through some command results:
$ kubectl config view
apiVersion: v1
clusters:
- cluster:
    insecure-skip-tls-verify: true
    server: https://localhost:6443
  name: docker-for-desktop-cluster
contexts:
- context:
    cluster: docker-for-desktop-cluster
    user: docker-for-desktop
  name: docker-for-desktop
current-context: docker-for-desktop
kind: Config
preferences: {}
users:
- name: docker-for-desktop
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
$ kubectl get serviceAccounts
NAME SECRETS AGE
default 1 3d
test-user 1 1d
$ kubectl describe serviceaccount test-user
Name: test-user
Namespace: default
Labels: <none>
Annotations: <none>
Image pull secrets: <none>
Mountable secrets: test-user-token-hxcsk
Tokens: test-user-token-hxcsk
Events: <none>
$ kubectl get secret test-user-token-hxcsk -o yaml
apiVersion: v1
data:
  ca.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0......=
  namespace: ZGVmYXVsdA==
  token: ZXlKaGJHY2lPaUpTVXpJMU5pSX......=
kind: Secret
metadata:
  annotations:
    kubernetes.io/service-account.name: test-user
    kubernetes.io/service-account.uid: 984b359a-6bd3-11e8-8600-XXXXXXX
  creationTimestamp: 2018-06-09T10:55:17Z
  name: test-user-token-hxcsk
  namespace: default
  resourceVersion: "110618"
  selfLink: /api/v1/namespaces/default/secrets/test-user-token-hxcsk
  uid: 98550de5-6bd3-11e8-8600-XXXXXX
type: kubernetes.io/service-account-token
This answer could be a little outdated, but I will try to give more perspective and a baseline for future readers who encounter the same or a similar problem.
TL;DR
The following error:
panic: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
is most likely connected with the lack of a token at the /var/run/secrets/kubernetes.io/serviceaccount location when using the in-cluster-client-configuration. It could also be related to running the in-cluster-client-configuration code outside of the cluster (for example, running it directly on a laptop or in a plain Docker container).
You can run the following commands to troubleshoot your issue further (assuming this code is running inside a Pod):
$ kubectl get serviceaccount X -o yaml:
look for: automountServiceAccountToken: false
$ kubectl describe pod XYZ
look for: containers.mounts and volumeMounts where Secret is mounted
Citing the official documentation:
Authenticating inside the cluster
This example shows you how to configure a client with client-go to authenticate to the Kubernetes API from an application running inside the Kubernetes cluster.
client-go uses the Service Account token mounted inside the Pod at the /var/run/secrets/kubernetes.io/serviceaccount path when the rest.InClusterConfig() is used.
-- Github.com: Kubernetes: client-go: Examples: in cluster client configuration
If you are authenticating to the Kubernetes API with ~/.kube/config you should be using the out-of-cluster-client-configuration.
Additional information:
I've added additional information for more reference on further troubleshooting when the code is run inside of a Pod.
automountServiceAccountToken: false
In version 1.6+, you can opt out of automounting API credentials for a service account by setting automountServiceAccountToken: false on the service account:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: go-serviceaccount
automountServiceAccountToken: false
In version 1.6+, you can also opt out of automounting API credentials for a particular pod:
apiVersion: v1
kind: Pod
metadata:
  name: sdk
spec:
  serviceAccountName: go-serviceaccount
  automountServiceAccountToken: false
-- Kubernetes.io: Docs: Tasks: Configure pod container: Configure service account
$ kubectl describe pod XYZ:
When the serviceAccount token is mounted, the Pod definition should look like this:
<-- OMITTED -->
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from go-serviceaccount-token-4rst8 (ro)
<-- OMITTED -->
Volumes:
go-serviceaccount-token-4rst8:
Type: Secret (a volume populated by a Secret)
SecretName: go-serviceaccount-token-4rst8
Optional: false
If it's not:
<-- OMITTED -->
Mounts: <none>
<-- OMITTED -->
Volumes: <none>
Additional resources:
Kubernetes.io: Docs: Reference: Access authn authz: Authentication
Just to make it clear, in case it helps you further debug it: the problem has nothing to do with Go or your code, and everything to do with the Kubernetes node not being able to get a token from the Kubernetes master.
In kubectl config view, clusters.cluster.server should probably point at an IP address that the node can reach.
It needs to access the CA, i.e., the master, in order to provide that token, and I'm guessing it fails to for that reason.
kubectl describe pod <your_pod_name> would probably tell you what the problem was in acquiring the token.
Since you assumed the problem was Go/your code and focused on that, you neglected to provide more information about your Kubernetes setup, which makes it more difficult for me to give you a better answer than my guess above ;-)
But I hope it helps!