kubectl create using inline options in bash script instead of yaml file - bash

I want to convert the line below to run as a bash script so that I can call it from a Jenkins job:
kubectl create -f tiller-sa-crb.yaml
The tiller-sa-crb.yaml file is shown below. How can I convert the above command into a bash script, such that my Jenkins job calls ./tiller-sa-crb.sh and it does everything below? Basically, my end goal is to have a pure shell script and invoke it from a Jenkins job.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system

You can also feed the manifest to the kubectl create command on stdin, like this:
#!/usr/bin/env bash
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system
EOF
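One detail worth noting about the heredoc: because the EOF delimiter is unquoted, the shell expands variables inside it, which makes it easy to parameterize the manifest from Jenkins job parameters. A minimal sketch (the NAMESPACE variable and the intermediate `manifest` variable are assumptions for illustration, not part of the original answer):

```shell
#!/usr/bin/env bash
set -euo pipefail
NAMESPACE="${NAMESPACE:-kube-system}"   # e.g. exported by the Jenkins job

# Unquoted EOF => ${NAMESPACE} below is substituted by the shell
manifest="$(cat <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: ${NAMESPACE}
EOF
)"
printf '%s\n' "$manifest"                          # inspect the rendered YAML
# printf '%s\n' "$manifest" | kubectl create -f -  # then feed it to kubectl
```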

Figured out how to use kubectl with command-line parameters to create the service account, and to create the role binding using that service account.
#!/bin/bash
# create the ServiceAccount in one line in namespace kube-system
kubectl create serviceaccount tiller -n kube-system
# create the ClusterRoleBinding in one line using the above ServiceAccount
kubectl create clusterrolebinding crb-cluster-admin-test2 \
  --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
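Putting both one-liners together as a tiller-sa-crb.sh sketch (the AlreadyExists tolerance and the kubectl availability guard are additions for rerun-safety, not part of the original answer):

```shell
#!/usr/bin/env bash
# tiller-sa-crb.sh — combines the two one-liners above so a Jenkins job can
# just call ./tiller-sa-crb.sh. Assumes kubectl is on PATH and the current
# kubeconfig context points at the target cluster.
set -euo pipefail

NAMESPACE="kube-system"
SA_NAME="tiller"
CRB_NAME="crb-cluster-admin-test2"

if command -v kubectl >/dev/null 2>&1; then
  # `kubectl create` fails if the object already exists, so tolerate reruns
  kubectl create serviceaccount "$SA_NAME" -n "$NAMESPACE" || true
  kubectl create clusterrolebinding "$CRB_NAME" \
    --clusterrole=cluster-admin \
    --serviceaccount="${NAMESPACE}:${SA_NAME}" || true
fi
```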

Related

kubectl kustomize add annotation to multiple overlay yaml files

I am trying to merge some annotations from one file into multiple resources, to keep things DRY and so that pods can get information from a Vault.
I can add the following code to "mylogger" by using kind: Deployment (which I presume only merges the information from this file into the mylogger resource). After deployment the mylogger pod seems to be working and can get the Vault information.
Other information: the project follows the base/overlay structure and uses the kubectl and kustomize commands.
For the files...
vault-values.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mylogger
spec:
  template:
    metadata:
      annotations:
        inject-vault-value1: "path-to-vault-value1"
        inject-vault-value2: "path-to-vault-value2"
The mylogger.yml resource file is:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mylogger
  labels:
    app: mylogger
spec:
  replicas: 2
  selector:
    matchLabels:
      app: mylogger
  template:
    metadata:
      labels:
        app: mylogger
    spec:
      initContainers:
      # ... and rest of file here
Running kubectl kustomize .../overlay/dev > manifest.yml, I can see the desired result in my manifest.yml file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mylogger
  labels:
    app: mylogger
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mylogger
  template:
    metadata:
      annotations:
        inject-vault-value1: "path-to-vault-value1"
        inject-vault-value2: "path-to-vault-value2"
      labels:
        app: mylogger
    spec:
      initContainers:
      # ... rest of file
The part under spec > template > metadata > annotations > inject-vault-value1 is there.
Is it possible to use the vault-values.yml file and insert its contents into, for example, the myjob resource? Basically the part from spec down to the annotations.
myjob.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myjob
spec:
  replicas: 1
  template:
    spec:
      containers:
      - name: myjob
        env:
        - name: random__env__variable
          value: "false"
      # ... rest of file here
Note: I want to use the file in the overlay folder, as it has the correct Vault information for that particular environment. I have nothing in the base folder concerning the Vault information or the Vault YAML file.
I thought patchesStrategicMerge would come in handy, but for the kustomize command it seems only doable for base/overlay contents.
How to best accomplish your goal depends on how your project is structured, but one option is to use a Kustomize patch, like this:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
# This points to where you're loading your `mylogger` and `myjob` deployments
resources:
- ...
patches:
- target:
    kind: Deployment
  patch: |
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: this-is-ignored
    spec:
      template:
        metadata:
          annotations:
            inject-vault-value1: "path-to-vault-value1"
            inject-vault-value2: "path-to-vault-value2"
This will apply your two custom annotations to all deployments generated by this kustomization.yaml file. If you need to limit it to specific deployments, you can use a pattern expression or label selector to match the appropriate objects.
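For instance, a hedged sketch of a label-selector target (the `app=mylogger` value is an assumption based on the labels shown above; adjust it to whichever label your deployments share):

```yaml
patches:
- target:
    kind: Deployment
    labelSelector: "app=mylogger"  # only Deployments carrying this label are patched
  patch: |
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: this-is-ignored
    spec:
      template:
        metadata:
          annotations:
            inject-vault-value1: "path-to-vault-value1"
            inject-vault-value2: "path-to-vault-value2"
```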

Fleet Server In Elastic Error : elastic-agent-cluster-leader is forbidden

We are setting up a Fleet Server in Kubernetes.
It has been given a CA and states that it is running, but we cannot shell into it, and the logs are nothing but the following:
E0817 09:12:10.074969 927 leaderelection.go:330] error retrieving resource lock default/elastic-agent-cluster-leader: leases.coordination.k8s.io "elastic-agent-cluster-leader" is forbidden: User "system:serviceaccount:default:elastic-agent" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "default"
I can find very little information on this ever happening, let alone a resolution. Any information pointing to a possible resolution would be massively helpful!
You need to make sure that you have applied the ServiceAccount, ClusterRole, and ClusterRoleBinding from the setup files.
An example of these can be found in the quickstart documentation:
https://www.elastic.co/guide/en/cloud-on-k8s/2.2/k8s-elastic-agent-fleet-quickstart.html
Service Account
apiVersion: v1
kind: ServiceAccount
metadata:
  name: elastic-agent
  namespace: default
Cluster Role
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: elastic-agent
rules:
- apiGroups: [""] # "" indicates the core API group
  resources:
  - pods
  - nodes
  - namespaces
  verbs:
  - get
  - watch
  - list
- apiGroups: ["coordination.k8s.io"]
  resources:
  - leases
  verbs:
  - get
  - create
  - update
Cluster Role Binding
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: elastic-agent
subjects:
- kind: ServiceAccount
  name: elastic-agent
  namespace: default
roleRef:
  kind: ClusterRole
  name: elastic-agent
  apiGroup: rbac.authorization.k8s.io
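After applying the three objects, you can check the exact permission from the error message with `kubectl auth can-i` (a verification sketch, not part of the original answer; the `command -v` guard is only so the snippet is harmless where kubectl is absent):

```shell
SUBJECT="system:serviceaccount:default:elastic-agent"
if command -v kubectl >/dev/null 2>&1; then
  # should print "yes" once the ClusterRoleBinding is in place
  kubectl auth can-i get leases.coordination.k8s.io --as="$SUBJECT" -n default
fi
```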

Kubernetes Dashboard - Internal error (500): Not enough data to create auth info structure

I have Kubernetes with ClusterRoles defined for my users and permissions granted by RoleBindings per namespace.
I want these users to be able to access the Kubernetes Dashboard with custom permissions. However, when they try to log in using the kubeconfig option, they get this message:
"Internal error (500): Not enough data to create auth info structure."
https://github.com/kubernetes/dashboard/blob/master/docs/user/access-control/creating-sample-user.md -- this guide is only for creating ADMIN users, not users with custom permissions or without privileges.
Update, SOLVED:
You have to do this:
Create a ServiceAccount per user:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: NAME-user
  namespace: kubernetes-dashboard
Adapt the RoleBinding, adding this ServiceAccount:
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: PUT YOUR CR HERE
  namespace: PUT YOUR NS HERE
subjects:
- kind: User
  name: PUT YOUR CR HERE
  apiGroup: 'rbac.authorization.k8s.io'
- kind: ServiceAccount
  name: NAME-user
  namespace: kubernetes-dashboard
roleRef:
  kind: ClusterRole
  name: PUT YOUR CR HERE
  apiGroup: 'rbac.authorization.k8s.io'
Get the token:
kubectl -n kubernetes-dashboard get secret $(kubectl -n kubernetes-dashboard get sa/NAME-user -o jsonpath="{.secrets[0].name}") -o go-template="{{.data.token | base64decode}}"
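The go-template above base64-decodes the token field of the Secret (Secrets store their data base64-encoded). The same decoding step can be sketched locally with the base64 CLI; the token value here is a stand-in for illustration, not a real token:

```shell
# `base64decode` in the go-template is equivalent to piping through `base64 -d`
token_b64="ZXhhbXBsZS10b2tlbg=="   # stand-in for .data.token from the Secret
token="$(printf '%s' "$token_b64" | base64 -d)"
printf '%s\n' "$token"             # prints: example-token
```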
Add the token to your kubeconfig file. It should contain something like this:
apiVersion: v1
clusters:
- cluster:
    server: https://XXXX
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: YOUR USER
  name: kubernetes
current-context: "kubernetes"
kind: Config
preferences: {}
users:
- name: YOUR USER
  user:
    client-certificate-data: CODED
    client-key-data: CODED
    token: CODED # ---> ADD TOKEN HERE
Log in.

Image pulling issue on Kubernetes from private repository

I created registry credentials, and when I apply them on a Pod like this:
apiVersion: v1
kind: Pod
metadata:
  name: private-reg
spec:
  containers:
  - name: private-reg-container
    image: registry.io.io/simple-node
  imagePullSecrets:
  - name: regcred
it successfully pulls the image.
But if I try to do this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: node123
  namespace: node123
spec:
  replicas: 5
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 2
      maxUnavailable: 0
  selector:
    matchLabels:
      name: node123
  template:
    metadata:
      labels:
        name: node123
    spec:
      containers:
      - name: node123
        image: registry.io.io/simple-node
        ports:
        - containerPort: 3000
      imagePullSecrets:
      - name: regcred
the Pod gets the error ImagePullBackOff, and when I describe it I see:
Failed to pull image "registry.io.io/simple-node": rpc error: code = Unknown desc = Error response from daemon: Get https://registry.io.io/v2/simple-node/manifests/latest: no basic auth credentials
Does anyone know how to solve this issue?
We always run images from a private registry, and this checklist might help you.
Put your parameters in environment variables in your terminal, to have a single source of truth:
export DOCKER_HOST=registry.io.io
export DOCKER_USER=<your-user>
export DOCKER_PASS=<your-pass>
Make sure that you can authenticate and that the image really exists:
echo $DOCKER_PASS | docker login -u$DOCKER_USER --password-stdin $DOCKER_HOST
docker pull ${DOCKER_HOST}/simple-node
Make sure that you created the docker-registry Secret in the same namespace as the pod/deployment:
namespace=mynamespace # default
kubectl -n ${namespace} create secret docker-registry regcred \
  --docker-server=${DOCKER_HOST} \
  --docker-username=${DOCKER_USER} \
  --docker-password=${DOCKER_PASS} \
  --docker-email=anything@will.work.com
Patch the service account used by the Pod with the secret:
namespace=mynamespace
kubectl -n ${namespace} patch serviceaccount default \
  -p '{"imagePullSecrets": [{"name": "regcred"}]}'
# if the pod uses another service account,
# replace "default" with the relevant service account
or add imagePullSecrets in the pod spec:
imagePullSecrets:
- name: regcred
containers:
- ...
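A common cause of this exact symptom is indentation: imagePullSecrets belongs at the Pod spec level, as a sibling of containers, not nested under a container. A minimal sketch of the Deployment template with the correct placement (field values taken from the question):

```yaml
spec:
  template:
    spec:
      containers:
      - name: node123
        image: registry.io.io/simple-node
      imagePullSecrets:   # sibling of `containers`, not a child of one
      - name: regcred
```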

Access Kubernetes Dashboard on EC2 Remotely

I set up a K8s cluster in EC2 and launched the Kubernetes dashboard by following these links:
https://github.com/kubernetes/dashboard
https://github.com/kubernetes/dashboard/wiki/Access-control
Here are commands I ran:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
kubectl proxy --address 0.0.0.0 --accept-hosts '.*'
As I added only a few IPs to the security group for the EC2 instances, I assume only those IPs can access the dashboard, so no worry about security here.
I try to access the dashboard using:
http://<My_IP>:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
Now, what's the easiest way to access the dashboard?
I noticed there are several related questions, but it seems no one really answers them.
Thanks a lot.
P.S. I just noticed some errors in the dashboard log. Is something wrong with the running dashboard?
You can use a Service with type: LoadBalancer and use loadBalancerSourceRanges: to limit access to your dashboard.
Have you created the ClusterRoleBinding for the kubernetes-dashboard ServiceAccount? If not, apply the YAML below, so that the ServiceAccount gets the cluster-admin role to access all Kubernetes resources.
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system
It depends on what ServiceAccount or User you are using to connect to the kube-apiserver. If you want access without working out the details of a policy, i.e. literally access to everything, your RBAC file can look similar to this:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: my-cluster-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: <your-user-from-your-~/.kube/config>
Then run:
kubectl apply -f <filename>
Second approach:
kubectl create clusterrolebinding my-cluster-admin --clusterrole=cluster-admin --user=<your-user-from-your-~/.kube/config>
You can also use a Group or ServiceAccount in the User field. Look at the official documentation about RBAC Authorization.
I also found a great tutorial, if you want to take it step by step.
