I would like to set up Ambassador as an API gateway for Kubernetes using Terraform. There are several ways to configure Ambassador. The recommended way, according to the documentation, is to use Kubernetes annotations on each service that is routed and exposed outside the cluster. This is easy to do with plain Kubernetes YAML:
kind: Service
apiVersion: v1
metadata:
  name: my-service
  annotations:
    getambassador.io/config: |
      ---
      apiVersion: ambassador/v0
      kind: Mapping
      name: my_service_mapping
      prefix: /my-service/
      service: my-service
spec:
  selector:
    app: MyApp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9376
The getambassador.io/config field's value starts with |, which indicates a multiline string. How can I achieve the same thing using Terraform HCL?
The Terraform documentation contains a section on multiline strings, using the heredoc syntax <<EOF your multiline string EOF:
resource "kubernetes_service" "my-service" {
"metadata" {
name = "my-service"
annotations {
"getambassador.io/config" = <<EOF
apiVersion: ambassador/v0
kind: Mapping
name: my_service_mapping
prefix: /my-service/
service: my-service
EOF
}
}
"spec" {
selector {
app = "MyApp"
}
port {
protocol = "TCP"
port = 80
target_port = "9376"
}
}
}
Make sure the YAML document separator (---) is not included in the heredoc; Terraform parses it incorrectly.
I have deployed Logstash and Elasticsearch pods on an EKS cluster. When I check the logs of the Logstash pod, it reports that the Elasticsearch server is unreachable, even though Elasticsearch is up and running. Please find the YAML files and the log error below.
configMap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: "logstash-configmap-development"
  namespace: "development"
  labels:
    app: "logstash-development"
data:
  logstash.conf: |-
    input {
      http {
      }
    }
    filter {
      json {
        source => "message"
      }
    }
    output {
      elasticsearch {
        hosts => ["https://my-server.com/elasticsearch-development/"]
        index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
      }
      stdout {
        codec => rubydebug
      }
    }
deployment.yaml
---
apiVersion: "apps/v1"
kind: "Deployment"
metadata:
  name: "logstash-development"
  namespace: "development"
spec:
  selector:
    matchLabels:
      app: "logstash-development"
  replicas: 1
  strategy:
    type: "RollingUpdate"
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  minReadySeconds: 5
  template:
    metadata:
      labels:
        app: "logstash-development"
    spec:
      containers:
        - name: "logstash-development"
          image: "logstash:7.10.2"
          imagePullPolicy: "Always"
          env:
            - name: "XPACK_MONITORING_ELASTICSEARCH_HOSTS"
              value: "https://my-server.com/elasticsearch-development/"
            - name: "XPACK_MONITORING_ELASTICSEARCH_URL"
              value: "https://my-server.com/elasticsearch-development/"
            - name: "SERVER_BASEPATH"
              value: "logstash-development"
          securityContext:
            privileged: true
          ports:
            - containerPort: 8080
              protocol: TCP
          volumeMounts:
            - name: "logstash-conf-volume"
              mountPath: "/usr/share/logstash/pipeline/"
      volumes:
        - name: "logstash-conf-volume"
          configMap:
            name: "logstash-configmap-development"
            items:
              - key: "logstash.conf"
                path: "logstash.conf"
      imagePullSecrets:
        - name: "logstash"
service.yaml
---
apiVersion: "v1"
kind: "Service"
metadata:
  name: "logstash-development"
  namespace: "development"
  labels:
    app: "logstash-development"
spec:
  ports:
    - port: 55770
      targetPort: 8080
  selector:
    app: "logstash-development"
Logstash pod log error
[2021-06-09T08:22:38,708][WARN ][logstash.licensechecker.licensereader] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"https://my-server.com/elasticsearch-development/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [https://my-server.com/elasticsearch-development/][Manticore::ConnectTimeout] connect timed out"}
Note: Elasticsearch is up and running, and when I hit the Logstash URL it returns status OK.
I have checked with the Elasticsearch cluster IP: Logstash is able to connect to Elasticsearch that way, but when I use the ingress path URL it cannot connect.
Also, from the logs I noticed it is using an incorrect URL for Elasticsearch.
My Elasticsearch URL is something like this: https://my-server.com/elasticsearch
but Logstash is instead looking for https://my-server.com:9200/elasticsearch
Elasticsearch is not accessible at that URL (https://my-server.com:9200/elasticsearch), so the connection times out.
Can someone tell me why it uses https://my-server.com:9200/elasticsearch and not https://my-server.com/elasticsearch?
I am now able to connect Logstash to Elasticsearch. If you point Logstash at Elasticsearch by DNS name, Logstash will by default use port 9200, so in my case it was building the URL https://my-server.com:9200/elasticsearch-development/. Elasticsearch was not accessible at that URL; it was only reachable at https://my-server.com/elasticsearch-development/. So I needed to add the HTTPS port (443) to my Elasticsearch URL, after which Logstash could connect to Elasticsearch at https://my-server.com:443/elasticsearch-development/.
Long story short:
In the deployment.yaml file, under the env variables XPACK_MONITORING_ELASTICSEARCH_HOSTS and XPACK_MONITORING_ELASTICSEARCH_URL, set the value to https://my-server.com:443/elasticsearch-development/ (as sketched below).
The same value goes into the logstash.conf file.
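For illustration, a minimal sketch of the corrected env entries in the deployment.yaml above (only the changed values are shown):
env:
  - name: "XPACK_MONITORING_ELASTICSEARCH_HOSTS"
    value: "https://my-server.com:443/elasticsearch-development/"
  - name: "XPACK_MONITORING_ELASTICSEARCH_URL"
    value: "https://my-server.com:443/elasticsearch-development/"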
I'm running this tutorial https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-deploy-elasticsearch.html and found that the Elasticsearch operator comes with a pre-defined secret, which can be read with kubectl get secret quickstart-es-elastic-user -o go-template='{{.data.elastic | base64decode}}'. I was wondering how I can reference it in the manifest of a pod that will use it as an env var. The pod's manifest is as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: user-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: user
  template:
    metadata:
      labels:
        app: user
    spec:
      containers:
        - name: user
          image: reactor/user
          env:
            - name: PORT
              value: "3000"
            - name: ES_SECRET
              valueFrom:
                secretKeyRef:
                  name: quickstart-es-elastic-user
                  key: { { .data.elastic } }
---
apiVersion: v1
kind: Service
metadata:
  name: user-svc
spec:
  selector:
    app: user
  ports:
    - name: user
      protocol: TCP
      port: 3000
      targetPort: 3000
When trying to define ES_SECRET as I did in this manifest, I get this error message: invalid map key: map[interface {}]interface {}{\".data.elastic\":interface {}(nil)}\n. Any help on resolving this would be much appreciated.
The secret returned by the API (kubectl get secret ...) is a JSON structure like this:
{
  "data": {
    "elastic": "base64 encoded string"
  }
}
So you just need to replace
key: { { .data.elastic } }
with
key: elastic
since it is a secretKeyRef (i.e. you reference a value by its key within the data (= contents) of the secret whose name you specified above). No need to worry about base64 decoding; Kubernetes does it for you.
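For clarity, a minimal sketch of the corrected ES_SECRET entry in the Deployment above:
env:
  - name: ES_SECRET
    valueFrom:
      secretKeyRef:
        name: quickstart-es-elastic-user
        key: elastic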
I have a Spring Boot project deployed in Kubernetes, and I would like to get the URL of the pod.
I referred to How to Get Current Pod in Kubernetes Java Application, but it did not work for me.
There are different environments (DEV, QA, etc.) and I want to get the URL dynamically. Is there any way?
My service YAML:
apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: test-service
  namespace: default
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/metric: concurrency
        # Disable scale to zero with a minScale of 1.
        autoscaling.knative.dev/minScale: "1"
        # Limit scaling to 100 pods.
        autoscaling.knative.dev/maxScale: "100"
    spec:
      containers:
        - image: testImage
          ports:
            - containerPort: 8080
Is it possible to add the following,
valueFrom:
  fieldRef:
    fieldPath: status.podIP
as described at https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/#the-downward-api?
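For reference, that Downward API snippet would normally be embedded under a container's env in the pod spec; a minimal sketch, where the variable name MY_POD_IP is only illustrative:
env:
  - name: MY_POD_IP
    valueFrom:
      fieldRef:
        fieldPath: status.podIP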
Your case is about the Knative ingress, not plain Kubernetes.
From the inbound network connectivity part of the Knative runtime contract documentation:
In addition, the following base set of HTTP/1.1 headers MUST be set on
the request:
Host - As specified by RFC 7230 Section 5.4
Also, the following proxy-specific request headers MUST be set:
Forwarded - As specified by RFC 7239.
Look at the headers of the incoming request.
An example for a servlet doGet method:
// Print every request header name and value as an HTML table row
Enumeration<String> headerNames = request.getHeaderNames();
while (headerNames.hasMoreElements()) {
    String paramName = headerNames.nextElement();
    out.print("<tr><td>" + paramName + "</td>\n");
    String paramValue = request.getHeader(paramName);
    out.println("<td> " + paramValue + "</td></tr>\n");
}
I am using the spring-cloud-starter-kubernetes-all dependency to read a ConfigMap from my Spring Boot microservices, and it works fine.
After modifying the ConfigMap I call the refresh endpoint:
minikube service list # to get the service url
curl http://192.168.99.100:30824/actuator/refresh -d {} -H "Content-Type: application/json"
It works as expected and the application picks up the ConfigMap changes.
Issue
The above works fine if I have only one pod of my application, but when I run more than one pod, only one of them picks up the changes, not all.
In the example below only one pod picks up the change:
[message-producer-5dc4b8b456-tbbjn message-producer] Say Hello to the World12431
[message-producer-5dc4b8b456-qzmgb message-producer] Say Hello to the World
minikube deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: message-producer
  labels:
    app: message-producer
spec:
  replicas: 2
  selector:
    matchLabels:
      app: message-producer
  template:
    metadata:
      labels:
        app: message-producer
    spec:
      containers:
        - name: message-producer
          image: sandeepbhardwaj/message-producer
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: message-producer
spec:
  selector:
    app: message-producer
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
  type: LoadBalancer
configmap.yml
kind: ConfigMap
apiVersion: v1
metadata:
  name: message-producer
data:
  application.yml: |-
    message: Say Hello to the World
bootstrap.yml
spring:
  cloud:
    kubernetes:
      config:
        enabled: true
        name: message-producer
        namespace: default
      reload:
        enabled: true
        mode: EVENT
        strategy: REFRESH
        period: 3000
configuration
@ConfigurationProperties(prefix = "")
@Configuration
@Getter
@Setter
public class MessageConfiguration {
    private String message = "Default message";
}
rbac
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  namespace: default # "namespace" can be omitted since ClusterRoles are not namespaced
  name: service-reader
rules:
  - apiGroups: [""] # "" indicates the core API group
    resources: ["services"]
    verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
# This cluster role binding allows anyone in the "manager" group to read secrets in any namespace.
kind: ClusterRoleBinding
metadata:
  name: service-reader
subjects:
  - kind: User
    name: default # Name is case sensitive
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: service-reader
  apiGroup: rbac.authorization.k8s.io
This is happening because when you hit curl http://192.168.99.100:30824/actuator/refresh -d {} -H "Content-Type: application/json", Kubernetes sends that request to only one of the pods behind the service (round-robin load balancing).
You should use the property source reload feature by setting spring.cloud.kubernetes.reload.enabled=true. This reloads the properties in every instance whenever the ConfigMap changes, so you don't need to call the refresh endpoint at all.
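For reference, a minimal sketch of the relevant bootstrap.yml fragment (the fuller configuration in the question above already contains it):
spring:
  cloud:
    kubernetes:
      reload:
        enabled: true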
I built a simple operator by tweaking the memcached example. The only major difference is that I need two Docker images in my pods. I got the deployment running. Here is my test.yaml, used to deploy with kubectl:
apiVersion: "cache.example.com/v1alpha1"
kind: "Memcached"
metadata:
name: "solar-demo"
spec:
size: 3
group: cache.example.com
names:
kind: Memcached
listKind: MemcachedList
plural: solar-demos
singular: solar-demo
scope: Namespaced
version: v1alpha1
I am still missing one piece though: the load-balancing part. Currently, under Docker, we use the nginx image as a reverse proxy, configured as follows:
upstream api_microservice {
  server api:3000;
}
upstream solar-svc_microservice {
  server solar-svc:3001;
}
server {
  listen $NGINX_PORT default;
  location /city {
    proxy_pass http://api_microservice;
  }
  location /solar {
    proxy_pass http://solar-svc_microservice;
  }
  root /html;
  location / {
    try_files /$uri /$uri/index.html /$uri.html /index.html=404;
  }
}
I want my cluster to expose port 8080 and forward traffic to ports 3000 and 3001 of the containers running inside my Pods.
My deployment:
dep := &appsv1.Deployment{
    TypeMeta: metav1.TypeMeta{
        APIVersion: "apps/v1",
        Kind:       "Deployment",
    },
    ObjectMeta: metav1.ObjectMeta{
        Name:      m.Name,
        Namespace: m.Namespace,
    },
    Spec: appsv1.DeploymentSpec{
        Replicas: &replicas,
        Selector: &metav1.LabelSelector{
            MatchLabels: ls,
        },
        Template: v1.PodTemplateSpec{
            ObjectMeta: metav1.ObjectMeta{
                Labels: ls,
            },
            Spec: v1.PodSpec{
                Containers: []v1.Container{
                    {
                        Image:   "shmukler/docker_solar-svc",
                        Name:    "solar-svc",
                        Command: []string{"npm", "run", "start-solar-svc"},
                        Ports: []v1.ContainerPort{{
                            ContainerPort: 3001,
                            Name:          "solar-svc",
                        }},
                    },
                    {
                        Image:   "shmukler/docker_solar-api",
                        Name:    "api",
                        Command: []string{"npm", "run", "start-api"},
                        Ports: []v1.ContainerPort{{
                            ContainerPort: 3000,
                            Name:          "solar-api",
                        }},
                    },
                },
            },
        },
    },
}
What do I need to add to have an Ingress, or something similar, running in front of my pods?
Thank you
What do I need to add to have an Ingress or something similar running in front of my pods?
Yes, an Ingress is designed for exactly that kind of task.
Ingress supports path-based routing, which can reproduce the configuration you showed in your Nginx example. Moreover, one of the most popular Ingress implementations uses Nginx as the proxy.
Ingress is basically a set of rules that allows traffic, otherwise dropped or forwarded elsewhere, to reach the cluster services.
Here is an example of an Ingress configuration:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-app
spec:
  rules:
    - host: '' # Empty value means 'any host'
      http:
        paths:
          - path: /city
            backend:
              serviceName: myapp
              servicePort: 3000
          - path: /solar
            backend:
              serviceName: myapp
              servicePort: 3001
Also, because a Pod is not a static thing, you should create a Service object, which will be a stable entry point to your application for the Ingress.
Here is an example of the Service:
kind: Service
apiVersion: v1
metadata:
  name: myapp
spec:
  selector:
    app: "NAME_OF_YOUR_DEPLOYMENT"
  ports:
    - name: city
      protocol: TCP
      port: 3000
      targetPort: 3000
    - name: solar
      protocol: TCP
      port: 3001
      targetPort: 3001