Description
Hello, I have been following a tutorial that walks you through setting up your own microservice in the cloud with Go Micro and Kubernetes.
The tutorial has a Kubernetes cluster as a prerequisite, so I followed another tutorial by the same author to create one.
To sum up the tutorials so you may get the big picture:
I first used Hetzner Cloud to buy a machine in a remote location so I could deploy my Rancher server there. Rancher is a UI tool for creating and managing a Kubernetes cluster.
Therefore, I:
Bought a machine on Hetzner Cloud
Deployed my Rancher server there
Went to the machine's public IP to log into Rancher
Made a Kubernetes cluster with one master and one worker node.
Everything was successful there; I could download the cluster's kubeconfig and manage the cluster from the command line.
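For example, after downloading the kubeconfig from Rancher I can do something like this (the file name is just illustrative):

export KUBECONFIG=~/Downloads/mycluster.yaml   # kubeconfig downloaded from Rancher
kubectl get nodes                              # shows the master and the worker node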
The next tutorial was on how to deploy the Go Micro framework and your own helloworld microservice to the Kubernetes cluster.
The tutorial walks you through deploying go micro's services first, and then shows you the deployment for your own microservice.
I managed to do everything and all of my services are up and running. There is just one problem: I can't log into the micro server with username admin and password micro.
Symptoms
What I can do:
I can list the Kubernetes pods with kubectl get pods -n micro
I can log into a particular pod (I logged into api like in tutorial) with kubectl exec -it -n micro {{pod}} -- bash
There I can see the micro executable.
From there, the tutorial just says to log in and run ./micro services, which should list all the services, but I am unable to log in. When I try the default admin/micro combination, it says Invalid token provided.
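Concretely, this is roughly what happens inside the pod (output paraphrased from memory):

./micro login
# Username: admin
# Password: micro
# -> Invalid token provided

./micro services
# fails as well, since there is no valid login token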
I checked the JWT keys in MICRO_AUTH_PRIVATE_KEY and MICRO_AUTH_PUBLIC_KEY and they match in every service.
I am able to create another user, but with that user I get an "access denied to namespace" error when trying to list the services, and I am unable to create rules with it.
Please help, this has been haunting me for days. 🙏🏽
Related
I am migrating my Spring Cloud Eureka application to AWS ECS and am currently having some trouble doing so.
I have an ECS cluster on AWS in which two EC2 services were created:
Eureka-server
Eureka-client
Each service has a task running on it.
QUESTION:
How do I establish a "docker network" between these two services so that I can register my eureka-client with the eureka-server's registry? Having them in the same cluster doesn't seem to do the trick.
Locally I am able to establish a "docker network" to achieve this. Is it possible to have a "docker network" on AWS?
The problem here lies in the way ECS clusters work. If you go to your dashboard and check out a running task, you'll see an IP address which AWS assigns to the resource automatically.
In Eureka's case, you need to somehow obtain this IP address while deploying your eureka-client apps and use it to register with your eureka-server. But of course tasks get destroyed and recreated, so that address changes and you easily lose it.
I've done this before and there are a couple of ways to achieve it. Here is one of them:
For the EC2 instances on which you intend to place the ECS tasks acting as eureka-server (the registry), assign Elastic IP addresses so you always know which host IP address to connect to.
You also need to tag them properly so you can refer to them in the next step, as sketched below.
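Roughly, with the AWS CLI (all names and IDs below are placeholders, not values from your setup):

# Allocate an Elastic IP and attach it to the instance that will host eureka-server
aws ec2 allocate-address --domain vpc
aws ec2 associate-address --instance-id i-0123456789abcdef0 --allocation-id eipalloc-0abc123

# Tag the corresponding ECS container instance with a custom attribute,
# so a placement constraint can refer to it later
aws ecs put-attributes \
  --cluster my-cluster \
  --attributes name=role,value=eureka-server,targetType=container-instance,targetId=<container-instance-arn>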
Then, switching back to ECS: when deploying your eureka-server tasks, the task placement configuration accepts an argument called placement_constraint.
This allows you to add a constraint to your tasks so you can place them on the instances you assigned Elastic IP addresses to in the previous step.
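For instance, with the AWS CLI this could look like the following (cluster, service, and attribute names are made up for illustration and assume the tagging step above):

aws ecs create-service \
  --cluster my-cluster \
  --service-name eureka-server \
  --task-definition eureka-server:1 \
  --desired-count 1 \
  --placement-constraints type=memberOf,expression="attribute:role == eureka-server"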
Now, if this is all good and you have deployed everything, you should be able to point your eureka-client apps at that IP and have them registered.
I know this looks dirty and kind of complicated, but the thing is that the Netflix OSS project for Eureka has missing parts, which I believe are their proprietary implementation for internal use that they don't want to share.
Another, and probably cooler, way of doing this is to use a Route53 domain or alias record for your instances, so that instead of an Elastic IP you can refer to them by DNS name.
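A hedged sketch of that alternative (the zone ID, record name, and IP below are placeholders):

aws route53 change-resource-record-sets \
  --hosted-zone-id Z123EXAMPLE \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "eureka.internal.example.com",
        "Type": "A",
        "TTL": 60,
        "ResourceRecords": [{"Value": "203.0.113.10"}]
      }
    }]
  }'

Your eureka-client apps would then register against eureka.internal.example.com, and the record can be updated if the underlying instance ever changes.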
I have a simple one-master (Ubuntu 16.04), one-worker (Windows Server 1803) Kubernetes cluster running in AWS. I am using Flannel for networking.
I have been able to deploy Windows containers using kubectl from the master without issue, and deploying multiple pods shows they are able to talk to each other. But I am not able to ping or curl the pods even from the Kubernetes Windows node host, let alone from the open internet. The pods are also unable to communicate with the outside internet (they can't curl external DNS names or even IP addresses).
Side note: deploying the same image directly with Docker on the Windows node gives a container that can connect to the internet and be accessed over the internet.
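To illustrate, these are the kinds of checks that fail (the pod IP and name are just examples from my cluster):

kubectl get pods -o wide                  # note a pod IP, e.g. 10.244.1.12
# From the Windows node host:
ping 10.244.1.12                          # times out
curl http://10.244.1.12:80/               # times out
# From inside a pod:
kubectl exec <pod-name> -- curl -s https://www.google.com   # times out, no route out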
I used the following setup guide from Microsoft, which uses kubeadm, Flannel, and scripts from the Microsoft SDN repo:
https://onedrive.live.com/view.aspx?resid=E2B6765015E5FA01!339&ithint=file%2cdocx&app=Word&authkey=!AGvs_s_hWs7xHGs
It is my understanding that on Windows the host network interface is not connected to the Kubernetes network interface by default, while the Docker network uses the default interface. That might be why Docker deployments can be accessed but Kubernetes deployments cannot.
However, I haven't found info on connecting these networks when using Flannel for pod communication on Windows.
I can add any logs or config info that anyone thinks is useful.
Any thoughts? Thanks for your help!
More Details:
I am looking into this: https://unofficial-kubernetes.readthedocs.io/en/latest/getting-started-guides/windows/ which describes connecting the network interfaces between the Windows default network and Kubernetes, but it does not seem to rely on the same Flannel host-gw model I used to set this up.
There is a Consul cluster in my local environment, as well as some developers' local machines. Each developer has a Tomcat server which runs some web artifacts in a Docker container, so I want to register these artifacts as services on Tomcat deploy.
Assuming that we have already registered an empty node for each developer's local machine, how can I register/deregister a new service on an existing node? Do I need a Consul agent running on every node?
I know it's possible to add a service when registering a node, but I haven't found any info on how to add services to a node dynamically. I'd prefer the HTTP API if possible (it's much easier to use on local machines).
Do I need a Consul agent running on every node?
Yes. Even though you can also register external services on a remote machine with a plain HTTP call (e.g. via curl), service discovery benefits from having the agent running on the nodes as well.
I know it's possible to add a service when registering a node, but I haven't found any info on how to add services to a node dynamically.
Registering a service is fairly easy in Consul, and you can find more details at the following link:
https://www.consul.io/intro/getting-started/services.html
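For a dynamic register/deregister flow over the HTTP API, a minimal sketch looks like this (the service name, ID, port, and health endpoint are illustrative, not from your setup):

# Register a service against the local agent on Tomcat deploy
curl -X PUT localhost:8500/v1/agent/service/register -d '{
  "ID": "web-artifact-1",
  "Name": "web-artifact",
  "Port": 8080,
  "Check": {
    "HTTP": "http://localhost:8080/health",
    "Interval": "10s"
  }
}'

# Deregister it again on undeploy
curl -X PUT localhost:8500/v1/agent/service/deregister/web-artifact-1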
However, if you wish to give better isolation to your developers, I would recommend running the Consul agent (server/client) in Docker and letting Registrator take care of everything.
Registrator from gliderlabs is a service registry bridge for Docker. It automatically registers and deregisters services for any Docker container by inspecting containers as they come online.
You can find more details here: https://github.com/gliderlabs/registrator
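Getting Registrator running next to a local Consul agent is a one-liner along these lines (taken from the project's quickstart; it assumes a Consul agent listening on localhost:8500):

docker run -d \
  --name=registrator \
  --net=host \
  --volume=/var/run/docker.sock:/tmp/docker.sock \
  gliderlabs/registrator:latest \
  consul://localhost:8500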
Assume the following stack:
A dedicated server
The server is running Vagrant
Vagrant is running 2 virtual machines: master + minion-1 (Kubernetes)
minion-1 is running a pod
Within the pod are 2 containers: webservice and fileservice
Both webservice and fileservice should be accessible from the internet, i.e. from outside, either as web.mydomain.com and file.mydomain.com, or as www.mydomain.com/web/ and www.mydomain.com/file/.
Before using Kubernetes, I was using a remote proxy (HAProxy) and simply mapped domain names to an internal IP/port.
Now with Kubernetes, I imagine there is something dedicated to this task, but I honestly have no clue where to start.
I read about createExternalLoadBalancer, Kubernetes Services, and kube-proxy. Should a reverse proxy still be put somewhere (in front of Vagrant or within a pod)? Also, staying within the scope of this question, is using Vagrant a good option for production?
The easiest thing for you to do at the moment is to make a Service of type NodePort, and to configure your HAProxy to point at minion-1:<nodePort>.
createExternalLoadBalancer is the old, less flexible way to do this; it requires the cloud provider to do the work. Type=NodePort doesn't require anything special from the cloud provider.
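A minimal sketch of such a Service (names, label selector, and ports are illustrative; fileservice would get its own Service or port entry):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: webservice
spec:
  type: NodePort
  selector:
    app: webservice
  ports:
  - port: 80          # port inside the cluster
    targetPort: 8080  # container port
    nodePort: 30080   # exposed on every node, e.g. minion-1:30080
EOF

HAProxy would then map web.mydomain.com to minion-1:30080.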
I am playing a little with Docker and Consul and I have a couple of questions regarding agent-service mapping, especially in a Docker environment. Assume I have a service named "myGreatService", a simple hello-world Node.js web application packaged in a Docker image named "myGreatServiceImage". From the Consul docs I understood that when you register a service (through HTTP or a service definition file), the service is "wired" to an agent/Consul node (the wired node can be retrieved via /v1/catalog/service/). So if a Consul node is down (or a node health check decides it is down), then all services "wired" to that Consul node will automatically be marked as down. Am I right?
If I run my "myGreatServiceImage" image multiple times on a single host via Docker (resulting in multiple instances of the "myGreatService" service):
how many agents should I run?
A single one per host, managing all containers (all service instances) on that host? Or a separate agent for each container (service instance)?
If a health check for a service fails, the service will be marked as down and won't show up if you do a DNS query for that service:
dig @localhost -p 8600 apache.service.consul
If you do a call to the API, you will see that the service is still listed. This is because the service is not removed; it is just marked as down. If you do an API call to check the health of that service, it will show as down.
curl localhost:8500/v1/catalog/service/apache
curl localhost:8500/v1/health/service/apache
You can add the ?passing flag to that last call to receive only the healthy services (just like the DNS query):
curl localhost:8500/v1/health/service/apache?passing
If the Consul agent on a host fails, then all services running on that host won't show up when you query Consul for services (either via a DNS query or via the API).
As for the number of agents you should be running: run one Consul agent per host. Let your services register themselves via the API of your local Consul agent (or preconfigure all your services in config files, but I recommend making this a dynamic process of self-registration).
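For the preconfigured variant mentioned above, a service definition file is enough; a minimal sketch (paths and values are illustrative):

cat > /etc/consul.d/apache.json <<'EOF'
{
  "service": {
    "name": "apache",
    "port": 80,
    "check": {
      "http": "http://localhost:80/",
      "interval": "10s"
    }
  }
}
EOF
consul reload   # tell the local agent to pick up the new definition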