I currently use Traefik with Docker and everything works fine.
But now I would like to create a new frontend rule for my apps, using Consul as a KV store.
So I created some keys:
/traefik/frontends/frontend1/backend backend2
/traefik/frontends/frontend1/routes/test_1/rule Host:test.localhost
I hoped to see them in my Traefik UI, but nothing appeared.
Looking at the logs, I can see:
time="2018-04-26T19:31:06Z" level=debug msg="Configuration received from provider consul: {}"
time="2018-04-26T19:31:06Z" level=info msg="Skipping same configuration for provider consul"
The connection with Consul is okay, and I saw some "Cannot get key..." log lines, so I created those keys to see whether that was causing the bug.
Do you have any idea?
Thanks
Are you missing a backend? In my testing, the bare minimum to get it to appear in the UI is the following:
traefik/frontends/frontend-test/backend backend-test
traefik/backends/backend-test
This is not enough, however, to have a working frontend/backend, but it should get something to show up in the UI.
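For reference, a minimal sketch of setting those keys with the Consul CLI; the server URL on the last line is a hypothetical value you would add to make the backend actually route somewhere:
consul kv put traefik/frontends/frontend-test/backend backend-test
consul kv put traefik/backends/backend-test ""                                        # empty value, just to create the key
consul kv put traefik/backends/backend-test/servers/server1/url http://10.0.0.2:80    # hypothetical server, needed for a working backend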
I'm using Strapi v4 along with the prometheus plugin, and right now my app metrics are exposed at http://localhost:1337/api/metrics
But I need them to be on another port, like http://localhost:9090/metrics (also removing the api prefix).
So Strapi and the rest of the backend would still run on port 1337, with only the metrics on 9090.
I've been through the documentation, but it seems like there is no configuration for that. Can anybody help me think of a way to do this?
Right now the metrics don't run on a separate server. They run on the same server as strapi.
This is so that the user-permissions plugin and Strapi's API tokens can be used to secure the endpoint.
In the future, I could look into making it an option to create a separate server.
I'm using Traefik v2 as a gateway. I have a frontend container running at https://some.site.com, which is powered by Traefik.
Now I have a micro-service backend with multiple services, all of them listening on port 80. I want to serve them on paths like https://some.site.com/api/service1, https://some.site.com/api/service2 ...
I have tried traefik.http.routers.service1.rule=(Host(some.site.com) && PathPrefix(/api/service1)), but it didn't work, and traefik.http.middlewares.add-api.addprefix.prefix=/api/service1 didn't work either.
How can I implement this?
Can you post your services' docker-compose configuration?
If you use middlewares, you may need to attach them to the router, like:
traefik.http.routers.service1.middlewares=add-api
traefik.http.middlewares.add-api.addprefix.prefix=/api/service1
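As a concrete illustration, here is a minimal docker-compose sketch (the image name is an assumption). Note that Traefik v2 rule values are wrapped in backticks, and this sketch uses a stripprefix middleware instead of addprefix, assuming the backend itself does not expect the /api/service1 prefix:
services:
  service1:
    image: example/service1   # hypothetical image listening on port 80
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.service1.rule=Host(`some.site.com`) && PathPrefix(`/api/service1`)"
      - "traefik.http.routers.service1.middlewares=service1-strip"
      - "traefik.http.middlewares.service1-strip.stripprefix.prefixes=/api/service1"
      - "traefik.http.services.service1.loadbalancer.server.port=80"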
I deployed a RabbitMQ server on my Kubernetes cluster and I am able to access the management UI from the browser. But my Spring Boot app cannot connect to port 5672, and I get a connection refused error. The same code works if I change my application.yml properties from the Kubernetes host to localhost and run a Docker image on my machine. I am not sure what I am doing wrong.
Has anyone tried this kind of setup?
Please help. Thanks!
Let's say the DNS name is rabbitmq. If you want to reach it, you have to make sure that rabbitmq's deployment has a Service attached with the correct ports exposed. You would then target rabbitmq:5672 (the AMQP port).
To make sure this, or something like it, exists, you can debug the k8s Services. Run kubectl get services | grep rabbitmq to make sure the Service exists. If it does, get the Service YAML by running kubectl get service rabbitmq-service-name -o yaml. Finally, check spec.ports[] for the ports that allow you to connect to the pod, and look for 5672 in spec.ports[].port for AMQP. In some cases the port might have been changed; that means spec.ports[].port might be 3030, for instance, while spec.ports[].targetPort is 5672.
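For illustration, a minimal sketch of such a Service and the matching Spring Boot settings (the names and labels are assumptions):
apiVersion: v1
kind: Service
metadata:
  name: rabbitmq
spec:
  selector:
    app: rabbitmq            # must match the pod labels of the RabbitMQ deployment
  ports:
    - name: amqp
      port: 5672             # port exposed by the Service
      targetPort: 5672       # port the RabbitMQ container listens on
    - name: management
      port: 15672
      targetPort: 15672
And the application.yml of the Spring Boot app, pointing at the Service name instead of localhost:
spring:
  rabbitmq:
    host: rabbitmq
    port: 5672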
Are you exposing the TCP port of RabbitMQ outside the cluster?
Maybe only the management port has been exposed.
If you can connect to the management UI but not on port 5672, that may indicate that port 5672 is not exposed outside the cluster.
Note: if I have not understood your question correctly, please let me know.
Good luck
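A quick way to check (assuming the Service is named rabbitmq and you have kubectl access to the cluster):
kubectl get svc rabbitmq -o wide              # is 5672 listed, and is the type NodePort/LoadBalancer if you need external access?
kubectl port-forward svc/rabbitmq 5672:5672   # temporary tunnel to test AMQP connectivity from your machine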
Using Kubernetes 1.12.6-gke.7 or higher it is possible to create a ManagedCertificate which is then referenced from an Ingress Resource exposing a Service to the Internet.
Running kubectl describe managedcertificate certificate-name first indicates the certificate is in a Provisioning state, but it eventually goes to FailedNotVisible.
Despite using a static IP and DNS that resolves fine to the HTTP version of said service, all ManagedCertificates end up in a "Status: FailedNotVisible" state.
Outline of what I am doing:
1. Generating a reserved (static) external IP address
2. Configuring a DNS A record in Cloud DNS pointing subdomain.domain.com to the IP address generated in step 1
3. Creating a ManagedCertificate named "subdomain-domain-certificate" with kubectl apply -f, with spec.domains containing a single domain corresponding to the subdomain.domain.com DNS record from step 2
4. Creating a simple deployment and a service exposing it
5. Creating an Ingress resource referring to the default backend of the service from step 4, with annotations for the static IP created in step 1 and the managed certificate from step 3 (a sketch of these manifests follows this outline)
6. Confirming that the Ingress is created and assigned the static IP
7. Visiting http://subdomain.domain.com serves the output from the pod created by the deployment in step 4
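For reference, a minimal sketch of the manifests from steps 3 and 5 (the Ingress name, static IP name, and Service name are assumptions):
apiVersion: networking.gke.io/v1beta1
kind: ManagedCertificate
metadata:
  name: subdomain-domain-certificate
spec:
  domains:
    - subdomain.domain.com
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: subdomain-domain-ingress                                            # hypothetical name
  annotations:
    kubernetes.io/ingress.global-static-ip-name: subdomain-domain-ip        # reserved static IP from step 1
    networking.gke.io/managed-certificates: subdomain-domain-certificate    # certificate from step 3
spec:
  backend:
    serviceName: subdomain-domain-service                                   # service from step 4
    servicePort: 80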
After a little while
kubectl describe managedcertificate subdomain-domain-certificate
results in "Status: FailedNotVisible".
Name:         subdomain-domain-certificate
Namespace:    default
Labels:       <none>
Annotations:  <none>
API Version:  networking.gke.io/v1beta1
Kind:         ManagedCertificate
Metadata:
  Creation Timestamp:  2019-04-15T17:35:22Z
  Generation:          1
  Resource Version:    52637
  Self Link:           /apis/networking.gke.io/v1beta1/namespaces/default/managedcertificates/subdomain-domain-certificate
  UID:                 d8e5a0a4-5fa4-11e9-984e-42010a84001c
Spec:
  Domains:
    subdomain.domain.com
Status:
  Certificate Name:    mcrt-ac63730e-c271-4826-9154-c198d654f9f8
  Certificate Status:  Provisioning
  Domain Status:
    Domain:  subdomain.domain.com
    Status:  FailedNotVisible
Events:
  Type    Reason  Age  From                            Message
  ----    ------  ---  ----                            -------
  Normal  Create  56m  managed-certificate-controller  Create SslCertificate mcrt-ac63730e-c271-4826-9154-c198d654f9f8
From what I understand, if the load balancer is configured correctly (done under the hood by the ManagedCertificate resource) and the DNS (which resolves fine to the non-HTTPS endpoint) checks out, the certificate should go into a Status: Active state?
The issue underlying my problem ended up being a DNSSEC misconfiguration. After running the DNS through https://dnssec-analyzer.verisignlabs.com/ I was able to identify and fix the issue.
DNSSEC was indeed not enabled for my domain, but after configuring that, the ManagedCertificate configuration was still not going through and I had no clue what was going on. Deleting and re-applying the ManagedCertificate and Ingress manifests did not do the trick. But running gcloud beta compute ssl-certificates list showed several unused managed certificates hanging around, and deleting them with gcloud compute ssl-certificates delete NAME ... and then restarting the configuration process did the trick in my case.
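In command form (CERT_NAME is a placeholder for each stale certificate):
gcloud beta compute ssl-certificates list
gcloud compute ssl-certificates delete CERT_NAME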
You need to make sure the domain name resolves to the IP address of your GKE Ingress, following the directions for "creating an Ingress with a managed certificate" exactly.
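A quick way to verify this (my-ingress is a placeholder for your Ingress name):
kubectl get ingress my-ingress        # the ADDRESS column shows the load balancer IP
dig +short subdomain.domain.com       # should return exactly that IP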
For more details, see the Google Cloud Load Balancing documentation. From https://cloud.google.com/load-balancing/docs/ssl-certificates#domain-status:
"The status FAILED_NOT_VISIBLE indicates that certificate provisioning failed for a domain because of a problem with DNS or the load balancing configuration. Make sure that DNS is configured so that the certificate's domain resolves to the IP address of the load balancer."
I just ran into this problem when I was setting up a new service and my allowance of 8 external IP addresses was used up.
Following the troubleshooting guide, I checked whether there was a forwarding rule for port 443 to my ingress.
There wasn't.
When I tried to set it up manually, I got an error telling me I had used up my 8 external addresses.
I deleted the forwarding rules I didn't need, et voilà!
Now, why the forwarding rule for port 80 was successfully set up for the same ingress is beyond me.
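For anyone checking the same thing, the relevant commands look roughly like this (RULE_NAME is a placeholder; rules for the HTTPS load balancer are global):
gcloud compute forwarding-rules list
gcloud compute forwarding-rules delete RULE_NAME --global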
I ran across this same error and found that I had created the ManagedCertificate in the wrong Kubernetes namespace. Once the ManagedCertificate was placed in the correct namespace, everything worked.
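A quick way to spot this (the ManagedCertificate must live in the same namespace as the Ingress that references it):
kubectl get managedcertificate --all-namespaces
kubectl get ingress --all-namespaces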
After reading the troubleshooting guide, I still wasn't able to resolve my issue. When I checked the GCP Ingress events, they showed that the Ingress could not locate the SSL policy. Check whether you missed something when creating the Ingress.
This is another reference that is useful for verifying the k8s manifests used to set up the managed certificate and Ingress. Hope it helps someone.
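To see those events from kubectl (my-ingress is a placeholder):
kubectl describe ingress my-ingress   # the Events section at the bottom surfaces errors reported by the GCE ingress controller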
Does anyone know if Google's HTTPS load balancer is working?
I was working on setting up an NGINX ingress service, but I noticed that a Google load balancer was automatically being set up by Kubernetes, and I was getting two external IPs instead of one. So instead of setting up the NGINX load balancer, I decided to use the Google service. I deleted my container cluster and created a brand new one. I started my HTTP pod and HTTP service on port 80. I then created my Ingress service and L7 controller pod. Now I'm getting the following error when I review the load balancer logs:
Event(api.ObjectReference{Kind:"Ingress", Namespace:"default",
Name:"echomap", UID:"9943e74c-76de-11e6-8c50-42010af0009b",
APIVersion:"extensions", ResourceVersion:"7935", FieldPath:""}): type:
'Warning' reason: 'GCE' googleapi: Error 400: Validation failed for
instance
'projects/mundolytics/zones/us-east1-c/instances/gke-airportal-default-pool-7753c577-129e':
instance may belong to at most one load-balanced instance group.,
instanceInMultipleLoadBalancedIgs
You probably have one or more hanging backend services. Run gcloud compute backend-services list to find them, and then gcloud compute backend-services delete [SERVICE-NAME] for each one to remove it.
$ gcloud compute backend-services list
NAME BACKENDS PROTOCOL
my-hanging-service us-central1-a/instanceGroups/gke-XXXXXXX-default-pool-XXXXXXX-grp HTTP
$ gcloud compute backend-services delete my-hanging-service