AWS user data custom AMI support in Amazon EKS managed nodegroups - YAML

I'm not able to create a node group using a YAML file. The file contains a bootstrap.sh override for creating the node group; here is the file:
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
name: ginny
region: us-west-2
version: '1.17'
managedNodeGroups:
- name: ginny-mng-custom-ami
instanceType: t3.small
desiredCapacity: 2
labels: {role: worker}
ami: ami-0030109261aa0205b
ssh:
publicKeyName: bastion
preBootstrapCommands:
- kubelet --version > /etc/eks/test-preBootstrapCommands
overrideBootstrapCommand: |
#!/bin/bash
set -ex
/etc/eks/bootstrap.sh ginny --kubelet-extra-args "--node-labels=alpha.eksctl.io/cluster-name=ginny,alpha.eksctl.io/nodegroup-name=ginny-mng-custom-ami,eks.amazonaws.com/nodegroup=ginny-mng-custom-ami,eks.amazonaws.com/nodegroup-image=ami-0030109261aa0205b"
[root@ip-1-2-3-4 eks-node-group]# eksctl create nodegroup --config-file maanged-nodegroup.yaml
Error: couldn't create node group filter from command line options: loading config file "maanged-nodegroup.yaml": error converting YAML to JSON: yaml: line 15: mapping values are not allowed in this context

Try it this way, with the command list and the bootstrap script indented under their keys; it should work:
preBootstrapCommands:
  - kubelet --version > /etc/eks/test-preBootstrapCommands
overrideBootstrapCommand: |
  #!/bin/bash
  set -ex
  /etc/eks/bootstrap.sh ginny --kubelet-extra-args "--node-labels=alpha.eksctl.io/cluster-name=ginny,alpha.eksctl.io/nodegroup-name=ginny-mng-custom-ami,eks.amazonaws.com/nodegroup=ginny-mng-custom-ami,eks.amazonaws.com/nodegroup-image=ami-0030109261aa0205b"
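For reference, this is how the whole nodegroup entry looks with consistent two-space indentation; a sketch assembled only from the values already shown in the question:
managedNodeGroups:
  - name: ginny-mng-custom-ami
    instanceType: t3.small
    desiredCapacity: 2
    labels: {role: worker}
    ami: ami-0030109261aa0205b
    ssh:
      publicKeyName: bastion
    preBootstrapCommands:
      - kubelet --version > /etc/eks/test-preBootstrapCommands
    overrideBootstrapCommand: |
      #!/bin/bash
      set -ex
      /etc/eks/bootstrap.sh ginny --kubelet-extra-args "--node-labels=alpha.eksctl.io/cluster-name=ginny,alpha.eksctl.io/nodegroup-name=ginny-mng-custom-ami,eks.amazonaws.com/nodegroup=ginny-mng-custom-ami,eks.amazonaws.com/nodegroup-image=ami-0030109261aa0205b"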

Related

no matches for kind "Kibana" in version "kibana.k8s.elastic.co/v1"

I'm trying to deploy Kibana on EKS, but I get this error when I run kubectl apply -f kibana.yml:
error: unable to recognize "kibana.yml": no matches for kind "Kibana" in version "kibana.k8s.elastic.co/v1"
Config file:
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: kibana
  namespace: core-staging
spec:
  version: 7.14.0
  count: 1
  config:
    elasticsearch.hosts:
      - <elasticsearch host>
    elasticsearch.username: <elasticsearch user>
    elasticsearch.password: <password>
You probably need to install the ECK custom resource definitions first:
kubectl create -f https://download.elastic.co/downloads/eck/2.6.1/crds.yaml
Reference:
https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-deploy-eck.html
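Per that guide, the CRDs alone are not enough: the ECK operator itself also has to be running before the Kibana resource can be reconciled. A minimal sketch assuming the same 2.6.1 release (the operator.yaml URL follows the quickstart's pattern; double-check it against the linked page):
kubectl create -f https://download.elastic.co/downloads/eck/2.6.1/crds.yaml
kubectl apply -f https://download.elastic.co/downloads/eck/2.6.1/operator.yaml
kubectl get pods -n elastic-system    # wait for the elastic-operator pod to be Running
kubectl apply -f kibana.yml           # then re-apply your Kibana resource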

Record Kubernetes container resource utilization data

I'm doing a perf test for a web server which is deployed on an EKS cluster. I'm invoking the server using JMeter with different conditions (varying thread count, payload size, etc.).
I want to record the Kubernetes perf data with timestamps so that I can analyze it together with my JMeter output (JTL).
I have been digging through the internet to find a way to record Kubernetes perf data, but I was unable to find a proper way to do it.
Can someone please suggest a standard way to do this?
Note: I have a multi-container pod as well.
In line with @Jonas' comment, this is the quickest way of installing Prometheus in your Kubernetes cluster. I've added the details in an answer because it was impossible to put the commands in a readable format in a comment.
Add the Bitnami Helm repo:
helm repo add bitnami https://charts.bitnami.com/bitnami
Install the Helm chart for Prometheus:
helm install my-release bitnami/kube-prometheus
The installation output will look like this:
C:\Users\ameena\Desktop\shine\Article\K8\promethus>helm install my-release bitnami/kube-prometheus
NAME: my-release
LAST DEPLOYED: Mon Apr 12 12:44:13 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
** Please be patient while the chart is being deployed **
Watch the Prometheus Operator Deployment status using the command:
kubectl get deploy -w --namespace default -l app.kubernetes.io/name=kube-prometheus-operator,app.kubernetes.io/instance=my-release
Watch the Prometheus StatefulSet status using the command:
kubectl get sts -w --namespace default -l app.kubernetes.io/name=kube-prometheus-prometheus,app.kubernetes.io/instance=my-release
Prometheus can be accessed via port "9090" on the following DNS name from within your cluster:
my-release-kube-prometheus-prometheus.default.svc.cluster.local
To access Prometheus from outside the cluster execute the following commands:
echo "Prometheus URL: http://127.0.0.1:9090/"
kubectl port-forward --namespace default svc/my-release-kube-prometheus-prometheus 9090:9090
Watch the Alertmanager StatefulSet status using the command:
kubectl get sts -w --namespace default -l app.kubernetes.io/name=kube-prometheus-alertmanager,app.kubernetes.io/instance=my-release
Alertmanager can be accessed via port "9093" on the following DNS name from within your cluster:
my-release-kube-prometheus-alertmanager.default.svc.cluster.local
To access Alertmanager from outside the cluster execute the following commands:
echo "Alertmanager URL: http://127.0.0.1:9093/"
kubectl port-forward --namespace default svc/my-release-kube-prometheus-alertmanager 9093:9093
Follow the commands to forward the UI to localhost.
echo "Prometheus URL: http://127.0.0.1:9090/"
kubectl port-forward --namespace default svc/my-release-kube-prometheus-prometheus 9090:9090
Open the UI in browser: http://127.0.0.1:9090/classic/graph
Annotate the pods so that Prometheus scrapes their metrics:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 4 # Update the replicas from 2 to 4
  template:
    metadata:
      labels:
        app: nginx
      annotations:
        prometheus.io/scrape: 'true'
        prometheus.io/port: '9102'
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
In the UI, apply the appropriate filters and start observing the crucial parameters such as memory and CPU. The UI supports autocomplete, so it will not be that difficult to figure things out. If you need raw timestamped values for your analysis, see the sketch below.
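To correlate with the JMeter JTL output you can also pull timestamped series straight from the Prometheus HTTP API instead of reading them off the graph. A minimal sketch, assuming the port-forward above is active; the metric names are the standard cAdvisor ones scraped by kube-prometheus, while the namespace, pod name, and time window are placeholders:
# Per-container CPU usage (cores) over the test window, at 15s resolution
curl -G 'http://127.0.0.1:9090/api/v1/query_range' \
  --data-urlencode 'query=rate(container_cpu_usage_seconds_total{namespace="default",pod="my-pod"}[2m])' \
  --data-urlencode 'start=2021-04-12T12:00:00Z' \
  --data-urlencode 'end=2021-04-12T13:00:00Z' \
  --data-urlencode 'step=15s'
# Per-container working-set memory (bytes)
curl -G 'http://127.0.0.1:9090/api/v1/query_range' \
  --data-urlencode 'query=container_memory_working_set_bytes{namespace="default",pod="my-pod"}' \
  --data-urlencode 'start=2021-04-12T12:00:00Z' \
  --data-urlencode 'end=2021-04-12T13:00:00Z' \
  --data-urlencode 'step=15s'
Each returned sample is a [unix_timestamp, value] pair, which you can join against the JTL timestamps; for the multi-container pod, the container label separates the per-container series.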
Regards

Using aws-cli in cronjob

I'm trying to run aws sts get-caller-identity in a CronJob; however, this results in /bin/sh: 1: aws: not found
spec:
  containers:
  - command:
    - /bin/sh
    - -c
    - aws sts get-caller-identity
As already mentioned in the comments, it seems that the AWS CLI is not installed in the image that you are using for this CronJob. You need to provide more information!
If you are the owner of the image, just install the AWS CLI within the Dockerfile. If you are not the owner, create your own image: extend the image you are currently using and install the AWS CLI.
For example, if you are using an Alpine-based image, just create a Dockerfile:
FROM <THE_ORIGINAL_IMAGE>:<TAG>
RUN apk add --no-cache python3 py3-pip && \
pip3 install --upgrade pip && \
pip3 install awscli
Then build the image and push it to a registry such as Docker Hub.
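A sketch of that build-and-push step (the repository name and tag are placeholders):
docker build -t <your-dockerhub-user>/<image-with-aws-cli>:latest .
docker push <your-dockerhub-user>/<image-with-aws-cli>:latest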
Now you can use this new image in your CronJob resource.
BUT, the next thing is that your CronJob pod needs permission to call the AWS STS service. There are multiple ways to get this done. The best way is to use IRSA (IAM Roles for Service Accounts); a sketch of the wiring follows below. Also check this blog article: https://aws.amazon.com/de/blogs/opensource/introducing-fine-grained-iam-roles-service-accounts/
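A minimal sketch of the IRSA side, assuming the IAM role already exists and is associated with your cluster's OIDC provider (the account ID, role name, and service account name are placeholders):
apiVersion: v1
kind: ServiceAccount
metadata:
  name: aws-cli-cron
  namespace: default
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::<account-id>:role/<role-allowed-to-call-sts>
Reference it from the CronJob pod template via serviceAccountName: aws-cli-cron, and the AWS CLI in the pod will pick up the role's credentials automatically.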
If you still need help, just provide more details.
Step 1:
Add your AWS access keys to a Kubernetes Secret (the name has to match the secretKeyRef used in the CronJob below, aws-cred):
kubectl create secret generic aws-cred --from-literal=AWS_SECRET_ACCESS_KEY=xxxxxxxxx --from-literal=AWS_ACCESS_KEY_ID=xxxxx
Step 2: Copy this into cronjob.yaml:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: aws-cli-sync
  labels:
    app: aws-cli-sync
spec:
  schedule: "0 17 * * *"
  concurrencyPolicy: Forbid
  successfulJobsHistoryLimit: 1
  failedJobsHistoryLimit: 1
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: aws-cli-sync
            image: mikesir87/aws-cli
            env:
            - name: AWS_ACCESS_KEY_ID
              valueFrom:
                secretKeyRef:
                  name: aws-cred
                  key: AWS_ACCESS_KEY_ID
            - name: AWS_SECRET_ACCESS_KEY
              valueFrom:
                secretKeyRef:
                  name: aws-cred
                  key: AWS_SECRET_ACCESS_KEY
            args:
            - /bin/sh
            - -c
            - date;aws s3 sync s3://xxx-backup-prod s3://elk-xxx-backup
          restartPolicy: Never
Step 3: Apply the job in the namespace where you added the Secret:
kubectl apply -f ./cronjob.yaml
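To verify the setup without waiting for the schedule, you can trigger a one-off run from the CronJob (a sketch; the job name is arbitrary):
kubectl create job --from=cronjob/aws-cli-sync aws-cli-sync-test
kubectl logs job/aws-cli-sync-test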

How can I use the port of a server running on localhost in kubernetes running spring boot app

I am new to Kubernetes and kubectl. I am basically running a gRPC server on my localhost. I would like to use this endpoint in a Spring Boot app running on Kubernetes (using kubectl on my Mac). If I set the following config in application.yml and run it in Kubernetes, it doesn't work. The same config works if I run it from the IDE.
grpc:
  client:
    local-server:
      address: static://localhost:6565
      negotiationType: PLAINTEXT
I see some people suggesting port-forward, but that is the other way round (it works when I want to reach a port that is already inside Kubernetes from localhost, just like reaching the Tomcat server running in Kubernetes from a browser on localhost).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: testspringconfigvol
  labels:
    app: testspring
spec:
  replicas: 1
  selector:
    matchLabels:
      app: testspringconfigvol
  template:
    metadata:
      labels:
        app: testspringconfigvol
    spec:
      initContainers:
        # taken from https://gist.github.com/tallclair/849601a16cebeee581ef2be50c351841
        # This container clones the desired git repo to the EmptyDir volume.
        - name: git-config
          image: alpine/git # Any image with git will do
          args:
            - clone
            - --single-branch
            - --
            - https://github.com/username/fakeconfig
            - /repo # Put it in the volume
          securityContext:
            runAsUser: 1 # Any non-root user will do. Match to the workload.
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
          volumeMounts:
            - mountPath: /repo
              name: git-config
      containers:
        - name: testspringconfigvol-cont
          image: username/testspring
          ports:
            - containerPort: 8080
          volumeMounts:
            - mountPath: /usr/local/lib/config/
              name: git-config
      volumes:
        - name: git-config
          emptyDir: {}
What I need in simple terms:
I have servers listening on ports on my localhost (localhost:6565, localhost:6566), and I need to access these ports somehow from inside Kubernetes. What should I set in the application.yml config? Will it be the same localhost:6565 and localhost:6566, or rather how-to-get-this-ip:6565 and how-to-get-this-ip:6566?
We can get the VMware host IP when using minikube with this command: minikube ssh "route -n | grep ^0.0.0.0 | awk '{ print \$2 }'". For me it's 10.0.2.2 on Mac. If using Kubernetes on Docker for Mac, it's host.docker.internal.
By using these addresses, I managed to connect to the services running on the host machine from Kubernetes.
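Applied to the application.yml from the question, that would look roughly like this (a sketch; on minikube use the gateway IP found above, e.g. 10.0.2.2, instead of host.docker.internal):
grpc:
  client:
    local-server:
      address: static://host.docker.internal:6565
      negotiationType: PLAINTEXT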
1) Inside your application.properties define
server.port=8000
2) Create Dockerfile
# Start with a base image containing Java runtime (mine java 8)
FROM openjdk:8u212-jdk-slim
# Add Maintainer Info
LABEL maintainer="vaquar.khan@gmail.com"
# Add a volume pointing to /tmp
VOLUME /tmp
# Make port 8080 available to the world outside this container
EXPOSE 8080
# The application's jar file (when packaged)
ARG JAR_FILE=target/codestatebkend-0.0.1-SNAPSHOT.jar
# Add the application's jar to the container
ADD ${JAR_FILE} codestatebkend.jar
# Run the jar file
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/codestatebkend.jar"]
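Before running it, build the image from this Dockerfile (a sketch; <image-name> is whatever tag you choose):
docker build -t <image-name> .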
3) Make sure Docker is working fine:
docker run --rm -p 8080:8080 <image-name>
4)
https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/
Use the following command to find the pod name:
kubectl get pods
then
kubectl port-forward <pod-name> 8080:8080
Useful links:
https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/#forward-a-local-port-to-a-port-on-the-pod
https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-services/#manually-constructing-apiserver-proxy-urls
https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/
https://developer.okta.com/blog/2019/05/22/java-microservices-spring-boot-spring-cloud

Configure kubectl to use a remote Kubernetes cluster on windows

I am trying to configure kubectl to use a remote Kubernetes cluster on my local Windows machine, following the "Install with Chocolatey on Windows" tutorial. However, I am not quite sure how to fill in the config file. It should look somehow like this:
apiVersion: v1
clusters: []
contexts: []
current-context: ""
kind: Config
preferences: {}
users: []
or like this, but I have no idea how to fill in those "variables":
apiVersion: v1
clusters:
- cluster:
    server: https://123.456.789.123:9999
    certificate-authority-data: yourcertificate
  name: your-k8s-cluster-name
contexts:
- context:
    cluster: your-k8s-cluster-name
    namespace: default
    user: admin
  name: default-context
current-context: default-context
kind: Config
preferences: {}
users:
- name: admin
  user:
    token: your-login-token
These values must be provided by your Kubernetes cluster administrator, usually in the form of a ready-made kubeconfig file.
After that you can access your cluster with the --kubeconfig <path to your kubeconfig file> option:
kubectl cluster-info --kubeconfig ./.kube/config -v=7 --insecure-skip-tls-verify=true --alsologtostderr
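If you would rather not edit the file by hand, kubectl can write the same entries for you. A sketch using the placeholder values from the example above (on Windows this writes to %USERPROFILE%\.kube\config by default):
kubectl config set-cluster your-k8s-cluster-name --server=https://123.456.789.123:9999
# for the certificate, point at a CA file and embed it:
kubectl config set-cluster your-k8s-cluster-name --certificate-authority=<path-to-ca.crt> --embed-certs=true
kubectl config set-credentials admin --token=your-login-token
kubectl config set-context default-context --cluster=your-k8s-cluster-name --user=admin --namespace=default
kubectl config use-context default-context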
