Ambassador Edge Stack: works with the sample project but not with my project - api-gateway

I am trying to configure Ambassador as an API gateway in my local Kubernetes cluster.
Installation:
Installed from https://www.getambassador.io/docs/latest/tutorials/getting-started/ (both the Windows and the Kubernetes parts).
I can log in with edgectl login --namespace=ambassador localhost and see the dashboard.
Configured the sample project they provide at https://www.getambassador.io/docs/latest/tutorials/quickstart-demo/.
Here is the YAML file for the deployment of the demo app:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: quote
  namespace: ambassador
spec:
  replicas: 1
  selector:
    matchLabels:
      app: quote
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: quote
    spec:
      containers:
      - name: backend
        image: docker.io/datawire/quote:0.4.1
        ports:
        - name: http
          containerPort: 8080
Everything works as expected. Now I am trying to configure it with my own project, but it is not working.
For the simplest case, keeping every other configuration the same as the Ambassador demo, I just changed image: docker.io/datawire/quote:0.4.1 to image: angularapp:latest, where angularapp:latest is a Docker image of an Angular 10 project.
But I am getting: upstream connect error or disconnect/reset before headers. reset reason: connection failure
I have spent a day on this problem. I reset Kubernetes from the Docker Desktop app and reconfigured it, but no luck.

That error occurs when a mapping is valid but the service it points to cannot be reached for some reason. Is the deployment actually running (kubectl get deploy -A -o wide)? Is your Angular app exposing port 8080? 8080 is a pretty common Kubernetes port, but not so common in the frontend development world. If you run kubectl exec -it {{AMBASSADOR_POD}} -- sh, does curl http://quote return the expected output?
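If the Angular image is a typical nginx-based build, it serves on port 80, and the Deployment, Service and Mapping all have to agree on that port. A minimal sketch, assuming an nginx-served image listening on 80 (the Service name, labels and prefix below are illustrative, not from the question):
apiVersion: v1
kind: Service
metadata:
  name: angular-app
  namespace: ambassador
spec:
  selector:
    app: angular-app
  ports:
  - name: http
    port: 80
    targetPort: 80   # must match the port the container actually listens on
---
apiVersion: getambassador.io/v2
kind: Mapping
metadata:
  name: angular-app-mapping
  namespace: ambassador
spec:
  prefix: /app/
  service: angular-app:80
The Deployment's containerPort would also need to change from 8080 to 80 in that case.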

Related

Record Kubernetes container resource utilization data

I'm doing a performance test for a web server deployed on an EKS cluster. I'm invoking the server using JMeter with different conditions (varying thread count, payload size, etc.).
I want to record the Kubernetes performance data with timestamps so that I can analyze it together with my JMeter output (JTL).
I have been digging through the internet for a way to record Kubernetes performance data, but I was unable to find a proper way to do it.
Could someone please suggest a standard way to do this?
Note: I also have a multi-container pod.
In line with @Jonas's comment, this is the quickest way of installing Prometheus in your Kubernetes cluster. I have added the details in an answer because it was impossible to put the commands in a readable format in a comment.
Add the Bitnami Helm repo:
helm repo add bitnami https://charts.bitnami.com/bitnami
Install the Helm chart for Prometheus:
helm install my-release bitnami/kube-prometheus
The installation output would be:
C:\Users\ameena\Desktop\shine\Article\K8\promethus>helm install my-release bitnami/kube-prometheus
NAME: my-release
LAST DEPLOYED: Mon Apr 12 12:44:13 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
** Please be patient while the chart is being deployed **
Watch the Prometheus Operator Deployment status using the command:
kubectl get deploy -w --namespace default -l app.kubernetes.io/name=kube-prometheus-operator,app.kubernetes.io/instance=my-release
Watch the Prometheus StatefulSet status using the command:
kubectl get sts -w --namespace default -l app.kubernetes.io/name=kube-prometheus-prometheus,app.kubernetes.io/instance=my-release
Prometheus can be accessed via port "9090" on the following DNS name from within your cluster:
my-release-kube-prometheus-prometheus.default.svc.cluster.local
To access Prometheus from outside the cluster execute the following commands:
echo "Prometheus URL: http://127.0.0.1:9090/"
kubectl port-forward --namespace default svc/my-release-kube-prometheus-prometheus 9090:9090
Watch the Alertmanager StatefulSet status using the command:
kubectl get sts -w --namespace default -l app.kubernetes.io/name=kube-prometheus-alertmanager,app.kubernetes.io/instance=my-release
Alertmanager can be accessed via port "9093" on the following DNS name from within your cluster:
my-release-kube-prometheus-alertmanager.default.svc.cluster.local
To access Alertmanager from outside the cluster execute the following commands:
echo "Alertmanager URL: http://127.0.0.1:9093/"
kubectl port-forward --namespace default svc/my-release-kube-prometheus-alertmanager 9093:9093
Follow the commands to forward the UI to localhost.
echo "Prometheus URL: http://127.0.0.1:9090/"
kubectl port-forward --namespace default svc/my-release-kube-prometheus-prometheus 9090:9090
Open the UI in browser: http://127.0.0.1:9090/classic/graph
Annotate the pods so that Prometheus scrapes their metrics:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 4 # Update the replicas from 2 to 4
  template:
    metadata:
      labels:
        app: nginx
      annotations:
        prometheus.io/scrape: 'true'
        prometheus.io/port: '9102'
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
In the UI, apply the appropriate filters and start observing the crucial parameters such as memory, CPU, etc. The UI supports autocomplete, so it will not be that difficult to figure things out.
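If you need the samples with timestamps so they can be correlated with the JMeter JTL file, one option is to pull a range of values from the Prometheus HTTP API during or after the test run. A sketch, assuming the default cAdvisor metrics are being scraped and the port-forward above is active; the start/end times and namespace are placeholders:
# per-container CPU usage (cores), sampled every 15s over the test window
curl -G 'http://127.0.0.1:9090/api/v1/query_range' \
  --data-urlencode 'query=rate(container_cpu_usage_seconds_total{namespace="default"}[1m])' \
  --data-urlencode 'start=2021-04-12T12:00:00Z' \
  --data-urlencode 'end=2021-04-12T12:30:00Z' \
  --data-urlencode 'step=15s'
# per-container working-set memory over the same window
curl -G 'http://127.0.0.1:9090/api/v1/query_range' \
  --data-urlencode 'query=container_memory_working_set_bytes{namespace="default"}' \
  --data-urlencode 'start=2021-04-12T12:00:00Z' \
  --data-urlencode 'end=2021-04-12T12:30:00Z' \
  --data-urlencode 'step=15s'
Each returned series is a list of [unix-timestamp, value] pairs and carries a container label, which also covers the multi-container pod case.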
Regards

upstream connect error or disconnect/reset before headers. reset reason: connection failure. Spring Boot and java 11

I'm having a problem migrating my plain Kubernetes app to an Istio-managed one. I'm using Google Cloud Platform (GCP), Istio 1.4, Google Kubernetes Engine (GKE), Spring Boot and Java 11.
I had the containers running in a plain GKE environment without a problem. Then I started migrating my Kubernetes cluster to use Istio. Since then I've been getting the following message when I try to access the exposed service:
upstream connect error or disconnect/reset before headers. reset reason: connection failure
This error message looks really generic. I found a lot of different problems with the same error message, but none of them was related to my problem.
Below is the Istio version:
client version: 1.4.10
control plane version: 1.4.10-gke.5
data plane version: 1.4.10-gke.5 (2 proxies)
Below are my YAML files:
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    account: tree-guest
  name: tree-guest-service-account
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: tree-guest
    service: tree-guest
  name: tree-guest
spec:
  ports:
  - name: http
    port: 8080
    targetPort: 8080
  selector:
    app: tree-guest
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: tree-guest
    version: v1
  name: tree-guest-v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tree-guest
      version: v1
  template:
    metadata:
      labels:
        app: tree-guestaz
        version: v1
    spec:
      containers:
      - image: registry.hub.docker.com/victorsens/tree-quest:circle_ci_build_00923285-3c44-4955-8de1-ed578e23c5cf
        imagePullPolicy: IfNotPresent
        name: tree-guest
        ports:
        - containerPort: 8080
      serviceAccount: tree-guest-service-account
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: tree-guest-gateway
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: tree-guest-virtual-service
spec:
  hosts:
  - "*"
  gateways:
  - tree-guest-gateway
  http:
  - match:
    - uri:
        prefix: /v1
    route:
    - destination:
        host: tree-guest
        port:
          number: 8080
To apply the YAML file I used the following command:
kubectl apply -f <(istioctl kube-inject -f ./tree-guest.yaml)
Below is the Istio proxy status after deploying the application:
istio-ingressgateway-6674cc989b-vwzqg.istio-system   SYNCED   SYNCED   SYNCED   SYNCED   istio-pilot-ff4489db8-2hx5f   1.4.10-gke.5
tree-guest-v1-774bf84ddd-jkhsh.default   SYNCED   SYNCED   SYNCED   SYNCED   istio-pilot-ff4489db8-2hx5f   1.4.10-gke.5
If someone has a tip about what is going wrong, please let me know. I have been stuck on this problem for a couple of days.
Thanks.
As @Victor mentioned, the problem here was a wrong YAML file.
"I solved it. In my case the yaml file was wrong. I reviewed it and the problem is now solved. Thank you guys." – Victor
If you're looking for YAML samples I would suggest taking a look at the Istio GitHub samples.
Since 503 upstream connect error or disconnect/reset before headers. reset reason: connection failure occurs very often, I've put together a little troubleshooting answer: other questions with 503 errors that I encountered over several months (with answers), useful information from the Istio documentation, and things I would check.
Examples with 503 error:
Istio 503:s between (Public) Gateway and Service
IstIO egress gateway gives HTTP 503 error
Istio Ingress Gateway with TLS termination returning 503 service unavailable
how to terminate ssl at ingress-gateway in istio?
Accessing service using istio ingress gives 503 error when mTLS is enabled
Common cause of 503 errors from istio documentation:
https://istio.io/docs/ops/best-practices/traffic-management/#avoid-503-errors-while-reconfiguring-service-routes
https://istio.io/docs/ops/common-problems/network-issues/#503-errors-after-setting-destination-rule
https://istio.io/latest/docs/concepts/traffic-management/#working-with-your-applications
A few things I would check first:
Check the Service port names. Istio can route traffic correctly only if it knows the protocol; the name should be <protocol>[-<suffix>], as mentioned in the Istio documentation.
Check mTLS; if there are any problems caused by mTLS, they usually result in a 503 error.
Check that Istio itself works; I would recommend applying the Bookinfo example application and checking whether it works as expected.
Check whether your namespace is injected with kubectl get namespace -L istio-injection.
If the VirtualService using the subsets arrives before the DestinationRule where the subsets are defined, the Envoy configuration generated by Pilot will refer to non-existent upstream pools. This results in HTTP 503 errors until all configuration objects are available to Pilot. See the sketch below.
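A minimal sketch of that last point, reusing the tree-guest service from the question (the subset name is illustrative): apply the DestinationRule that defines the subsets before, or together with, the VirtualService that routes to them.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: tree-guest
spec:
  host: tree-guest
  subsets:
  - name: v1
    labels:
      version: v1
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: tree-guest
spec:
  hosts:
  - tree-guest
  http:
  - route:
    - destination:
        host: tree-guest
        subset: v1   # refers to the subset defined in the DestinationRule above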
I landed here with exactly the same symptoms. In my case, though, I had to switch the pod's listen address from 172.0.0.1 to 0.0.0.0, which solved my issue.
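For a Spring Boot service like the one in this question, that usually means making sure the embedded server binds to all interfaces rather than to a single address. A minimal sketch in application.yml (server.address and server.port are standard Spring Boot properties; the value is the fix described above):
server:
  address: 0.0.0.0   # listen on all interfaces so the Envoy sidecar can reach the app
  port: 8080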

Running a bash script using a Kubernetes Service

I am not sure how dumb or unreasonable this question is, but we are trying to see if we can do this in any way.
I have a bash script, and I want to run it when I invoke a URL.
Let's say the URL is https://domainname.com/jobapi.
When I invoke this in the browser, it should invoke the bash script in the container.
Is this really possible?
If it is possible, I want to know whether I need to add this script as a deployment or a job.
The first step, before looking at Kubernetes, is to configure a web server to run your script. This could be a generic web server like nginx or Apache, and you could add your script as a CGI script. There are plenty of tutorials out there that explain how to write CGI scripts.
Depending on the requirements of your application, a simple HTTP hook server might be a better match. Have a look at, for example, https://github.com/adnanh/webhook.
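With adnanh/webhook, for instance, the mapping from a URL path to a script is just a hook definition. A rough sketch, where the hook id, script path and port are placeholders:
# hooks.yaml
- id: jobapi
  execute-command: /scripts/job.sh
  command-working-directory: /scripts
Started with webhook -hooks hooks.yaml -port 9000, the script would then run whenever http://<host>:9000/hooks/jobapi is requested.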
Either way, try this out with just Docker first, before trying to create a pod and potentially a service and an ingress in Kubernetes.
In a second step, to be able to access your service (the server invoking your script), you need to create a pod, probably through a deployment, and potentially a service and an ingress for it.
Kubernetes Jobs are for running a script (or other program) once. They're most useful for automating maintenance tasks for your application.
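If you ever do want that run-once behaviour instead of an always-on endpoint, a Kubernetes Job is the shape it would take. A sketch, where the image and script path are placeholders and the script is assumed to be baked into the image or mounted from a volume:
apiVersion: batch/v1
kind: Job
metadata:
  name: run-script-once
spec:
  backoffLimit: 1
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: job
        image: bash:5
        command: ["bash", "/scripts/job.sh"]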
What I would try is to run the shell script from a PHP file; otherwise you are going to need some sort of driver to trigger the script.
So you would have the script as a regular executable, and upon a request PHP would execute it via the shell.
You can actually make it look like an API: domain.com/job1 could execute job1, domain.com/jobn could execute jobn, and so on.
The approach I'm describing would only work as a Deployment, as you want the server to be always up and ready to receive requests.
Create an Ingress (or a NodePort Service if it is externally facing) which routes to a Service.
The Service selects, via labels, the pod that runs the script. This pod can come from a Deployment or be a standalone pod.
The Service exposes that pod/Deployment.
The Deployment (or the pod directly) runs the container that triggers the shell script.
Ingress service:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: "domainame.com"
    http:
      paths:
      - path: /jobapi
        pathType: Prefix
        backend:
          serviceName: my-service
          servicePort: 8080
my-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: NodePort
  selector:
    app: MyApp
  ports:
  - nodePort: 30007
    port: 8080
    targetPort: 8080
Run your bash script; this can be done by defining a Deployment or a Pod.
Pod:
kubectl run myapp --image=nginx --labels=app=MyApp --port=8080 -- /bin/sh -c "echo 'Im up'"
or
deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: MyAppdep
spec:
  replicas: 2
  selector:
    matchLabels:
      app: MyApp
  template:
    metadata:
      labels:
        app: MyApp
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        command: ["/bin/sh", "-c", "echo 'test'"]
        ports:
        - containerPort: 8080

How to deploy a simple Hello World program to local Kubernetes cluster

I have a very simple Spring Boot Hello World program. When I run the application locally, I can navigate to http://localhost:8080/ and see the "Hello World" greeting displayed on the page. I have also created a Dockerfile and can build an image from it.
My next goal is to deploy this to a local Kubernetes cluster. I have used Docker Desktop to create a local Kubernetes cluster. I want to create a deployment for my application, host it locally on the cluster, and access it from a browser.
I am not sure where to start with this deployment. I know that I will need to create charts, but I have no idea how to ultimately push this image to my cluster...
You need to create Kubernetes Deployment and Service definitions.
These definitions can be in JSON or YAML format. Here are example definitions you can use as a template for your deployment.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: your-very-first-deployment
  labels:
    app: first-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: first-deployment
  template:
    metadata:
      labels:
        app: first-deployment
    spec:
      containers:
      - name: your-app
        image: your-image:with-version
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: your-service
spec:
  type: NodePort
  ports:
  - port: 80
    nodePort: 30180
    targetPort: 8080
  selector:
    app: first-deployment
Do not forget to update the image line in the Deployment YAML with your image name and image version. After that replacement, save the file as, for example, deployment.yaml and then apply it with the kubectl apply -f deployment.yaml command.
Note that you need to use port 30180 to access your application, as stated in the Service definition's nodePort value (http://localhost:30180).
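Since this is Docker Desktop's built-in cluster, you normally do not need to push the image to a registry at all: the cluster shares the local Docker image cache, so a locally built tag is already visible to it. A sketch, with your-image:with-version as the placeholder tag from the manifest above:
docker build -t your-image:with-version .
kubectl apply -f deployment.yaml
kubectl get pods -w
If you use a :latest tag, also set imagePullPolicy: IfNotPresent (or Never) on the container so Kubernetes does not try to pull it from a remote registry.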
Links:
Kubernetes services: https://kubernetes.io/docs/concepts/services-networking/service/
Kubernetes deployments: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/
You need to define a Deployment first to start; define the Docker image and the required environment in the Deployment.

Kubernetes Pod container's websocket not reachable

I've created a sample spring boot application that exposes a websocket endpoint at localhost:8080/ws.
Basically I followed this guide, except that I am not using the .withSockJS option.
When I run this application locally, my sample angular app can connect to the websocket.
Now I want to have both containers (the Spring Boot app and the Angular app) in a single Kubernetes pod.
They both spin up when I run them. Then I expose the Angular frontend's port to be able to view the app. But the logs tell me that it is not able to connect to the websocket backend via ws://localhost:8080/ws.
Even when I connect to the backend container, I can see that it is up and running, but my curl websocket test also always fails.
This is my pod definition:
---
apiVersion: v1
kind: Pod
metadata:
  name: my-app.example.org
  labels:
    app: my-app-system
spec:
  containers:
  - name: backend
    image: test/my-app-backend
    ports:
    - containerPort: 8080
    env:
    - name: SPRING_PROFILES_ACTIVE
      value: "dev-docker-postgres"
    - name: JAVA_OPTIONS
      value: "-agentlib:jdwp=transport=dt_socket,address=5005,server=y,suspend=n"
  - name: frontend
    image: test/my-app-frontend
    ports:
    - containerPort: 4200
    imagePullPolicy: Always
    command: ["/bin/sh"]
    args: ["-c", "npm run kubstart"]
  imagePullSecrets:
  - name: registrykey
One more thing:
When I additionally expose the backend container's port via a NodePort Service and start the Angular app locally on my machine, pointed at the Service's URL, the websocket connection succeeds.
It seems I am not able to let both containers in my pod communicate with each other via ws://
Never mind...
Of course this can't work via localhost: the websocket connection is opened by the browser, not by the frontend container, so localhost refers to the user's machine rather than the pod.
I need to expose the backend's port and access the websocket "from outside".
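A minimal sketch of that, reusing the app: my-app-system label from the pod above (the Service name and nodePort value are arbitrary): expose the backend with a NodePort Service and let the browser connect to ws://<node-ip>:30080/ws instead of ws://localhost:8080/ws.
apiVersion: v1
kind: Service
metadata:
  name: my-app-backend
spec:
  type: NodePort
  selector:
    app: my-app-system
  ports:
  - name: http
    port: 8080
    targetPort: 8080
    nodePort: 30080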
