I have an EKS cluster running with both Linux and Windows nodes. I schedule pods on the Windows nodes; they run for about 30 minutes and are then removed. The first thing each pod does is download some data from S3 using the AWS CLI installed in it.
I am facing some intermittent connectivity issues. Pods get spun up, and sometimes they fail with a fatal error:
Could not connect to the endpoint URL: "https://sts.eu-west-1.amazonaws.com"
As far as I can see, this only happens when I schedule more than one pod on a node. I do use a smaller instance type (m5.large), but I am not close to reaching the pod limit for this instance type. When there is one pod per node, they can all connect and download data from S3.
Reading the documentation, I can see it is possible to schedule more than one pod per EC2 instance, but I am unsure what the requirements on the EC2 instance are to give all those pods access to download data from S3. I did try to add more ENIs to the EC2 instances, but this prevented them from registering as nodes in the EKS cluster.
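For reference, the first thing each pod runs is essentially the following (the bucket name and paths here are placeholders, not my real ones):

```shell
# Confirm the pod can reach STS and resolve its credentials,
# then pull the input data from S3 (bucket/prefix are placeholders)
aws sts get-caller-identity --region eu-west-1
aws s3 cp s3://example-bucket/input/ ./data --recursive --region eu-west-1
```

On the affected pods, the error above appears as soon as the CLI tries to talk to the regional STS endpoint.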
Description
Hello, I have been following a tutorial that sets up my own microservice in the cloud with Go Micro and Kubernetes.
The tutorial has a Kubernetes cluster as a prerequisite, so I followed another tutorial by the same author to create one.
To sum up the tutorials so you can get the big picture:
I first used Hetzner Cloud to buy some machines in a remote location so I could deploy my Rancher server there. Rancher is a UI tool for creating and managing Kubernetes clusters.
Therefore, I:
Bought a machine on Hetzner Cloud
Deployed my Rancher server there
Went to a public IP to log into Rancher
Created a Kubernetes cluster with one master and one worker node.
Everything was successful: I could download the cluster's kubeconfig and manipulate the cluster from the command line.
The next tutorial covered deploying the Go Micro framework and your own hello-world microservice into the Kubernetes cluster.
It walks you through deploying Go Micro's services first, and then shows you the deployment for your own microservice.
I managed to do everything, and all of my services are up and running. There is just one problem: I can't log into the micro server with username admin and password micro.
Symptoms
What I can do:
I can list kubernetes pods with kubectl get pods -n micro
I can log into a particular pod (I logged into api, as in the tutorial) with kubectl exec -it -n micro {{pod}} -- bash
There I can see the micro executable.
From there, the tutorial just says to log in and execute ./micro services, which lists all the services, but I am unable to log in. When I try the default "admin, micro" combination, it says Invalid token provided.
I checked the JWT keys in MICRO_AUTH_PRIVATE_KEY and MICRO_AUTH_PUBLIC_KEY and they match in every service.
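Concretely, I compared the keys with something like this (assuming the images ship printenv):

```shell
# Dump the auth key pair from every pod in the micro namespace for comparison
for p in $(kubectl get pods -n micro -o name); do
  echo "== $p"
  kubectl exec -n micro "$p" -- printenv MICRO_AUTH_PUBLIC_KEY MICRO_AUTH_PRIVATE_KEY
done
```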
I am able to create another user, after which I get an "access denied to namespace" error when trying to list the services. I am also unable to create rules with that user.
Please help, this has been haunting me for days. 🙏🏽
I'm running multiple websites on Amazon AWS. I mounted an EBS volume on the master server; the mount directory holds the websites' files.
I also configured an Application Load Balancer setup that launches small instances when there is load on the master. The clone servers run NFS clients to connect to the master server and mount the websites' files.
Everything works perfectly, but the issue is that many times the clone servers cannot mount the NFS share, even when I try to mount manually. I have to run exportfs -f to flush the export table on the master instance.
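For reference, the manual workaround looks like this (the export path and hostname are placeholders):

```shell
# On the master (NFS server): flush and re-apply the export table
sudo exportfs -f
sudo exportfs -ra

# On a clone (NFS client): mount the website files by hand
sudo mount -t nfs master.internal:/var/www /var/www
```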
I do not know why this happens. If you need any further information, just give me the command to run for it.
As I understand it, you are trying to mount the EBS volume from multiple EC2 instances.
This can be done using the Multi-Attach capability of EBS. However, there are some limitations to this capability (https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-volumes-multi.html). In short, if you have more than 16 instances trying to attach this EBS volume, you hit the limit.
My suggestion to solve this: use EFS instead. EFS is an elastic file system managed by AWS. It is really simple to use, can be mounted from multiple Linux instances, and is elastic (so you pay as you grow). Check it here: https://docs.aws.amazon.com/efs/latest/ug/mount-multiple-ec2-instances.html
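As a sketch, mounting the same EFS file system from every instance looks roughly like this (the file-system ID and mount point are placeholders):

```shell
# Install the EFS mount helper, then mount the file system
sudo yum install -y amazon-efs-utils
sudo mkdir -p /var/www
sudo mount -t efs fs-0123456789abcdef0:/ /var/www
```

Note that the instances' security groups must allow NFS traffic (TCP 2049) to the EFS mount targets.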
I have a 3-node Kubernetes cluster (a master and two worker nodes) on AWS that I created with kubeadm (https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/).
I have created some deployments from the master node and I can see that pods are created on the 2 nodes for each of the deployments.
But the issue is that I can't access a pod's IP from the master or from the other node, so a pod's IP is only reachable on the node where that pod is running.
I have a service of type NodePort, so when the service (pod1:port) hits the other pod (pod2), the request hangs and times out.
Thanks.
It works after either disabling the firewall or running the command below.
I found this bug in my search; it looks like it is related to Docker >= 1.13 and flannel.
See: https://github.com/coreos/flannel/issues/799
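For anyone hitting the same thing: the fix discussed in that issue is the iptables FORWARD policy change. Docker >= 1.13 sets the FORWARD chain policy to DROP, which blocks flannel's pod-to-pod traffic between nodes:

```shell
# Run on every node; note this does not persist across reboots
sudo iptables -P FORWARD ACCEPT
```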
I am using Amazon AWS EC2 to host a Parse Server database. It was working fine for the last couple of weeks, but today I got a 503 error saying: "The server is temporarily unable to service your request due to maintenance downtime or capacity problems." My question is: is it because I'm using the free t2.micro tier and have run out of quota, or could there be some other problem? I just launched another instance and it seems to be working fine for now.
Have you set up a load balancer? Check out Elastic Beanstalk, which manages EC2 instances so that servers automatically spin up and down as your needs require. Your server may have just crashed, and nothing was set up to automatically redeploy it.
Newbie with etcd/ZooKeeper-type services here...
I'm not quite sure how to handle cluster installation for etcd. Should the service be installed on each client, or on a group of independent servers? I ask because, if I'm on a client, how would I query the cluster? Every tutorial I've read shows a curl command running against localhost.
For an etcd cluster installation, you can install the service on a group of independent servers and form a cluster. The cluster can then be queried either by logging on to one of the machines and running curl locally, or remotely by specifying the IP address of one of the cluster member nodes.
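For example, assuming a three-member cluster on 10.0.0.11-13 (placeholder addresses), a remote query from a client machine looks like this:

```shell
# Health check over HTTP against one member's client port
curl http://10.0.0.11:2379/health

# Or with etcdctl v3, checking every endpoint at once
ETCDCTL_API=3 etcdctl \
  --endpoints=10.0.0.11:2379,10.0.0.12:2379,10.0.0.13:2379 \
  endpoint health
```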
For more information on how to set it up, follow this article