I want to run https://hub.docker.com/r/jboss/infinispan-server/ on ECS as a Service. This container has the following ports open: 7600, 8080, 8181, 8888, 9990, 11211, 11222, and 57600. I want to access all these ports by connecting an ALB to target groups. I know 11222 is accessible over HTTP, but I don't know how to use the other ports. Could someone please help me understand how I can do that?
For example:
infinispan.mydomain.com -> 11222 (HTTP Infinispan app)
infinispan9990.mydomain.com -> 9990
etc.....
How can I use all those ports in the target groups for health checks? If that's not possible, how can I use Infinispan and all its ports from another service?
I know there is an implementation for EKS, but I want to use ECS. I already tried, but I couldn't find any good article or way to do that.
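For the HTTP-capable ports (for example 9990 and 11222), I assume I would need one target group per port plus host-header rules on the ALB listener, roughly like this (just a sketch; the VPC id and ARNs are placeholders):
aws elbv2 create-target-group --name infinispan-9990 --protocol HTTP --port 9990 \
    --vpc-id <my-vpc-id> --health-check-path /
aws elbv2 create-rule --listener-arn <alb-listener-arn> --priority 20 \
    --conditions Field=host-header,Values=infinispan9990.mydomain.com \
    --actions Type=forward,TargetGroupArn=<infinispan-9990-target-group-arn>
What I'm unsure about are the non-HTTP ports such as 7600 or 11211, since an ALB only speaks HTTP/HTTPS.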
The latest Infinispan images (10+) have been designed to run with the infinispan-operator, so Infinispan on k8s or OKD is the first option.
Maybe you can find more info on how to adapt the image for ECS on the following pages:
Infinispan Image project
ConfigGenerator for Infinispan
Should I run consul slaves alongside nomad slaves or inside them?
The latter might not make sense at all, but I'm asking just in case.
I brought up my own Nomad cluster with Consul slaves running alongside Nomad slaves (on the worker nodes); my deployable artifacts are Docker containers (Java Spring applications).
The issue with my current setup is that my applications can't access the Consul slaves to read configuration (none of 0.0.0.0, localhost, or the worker node IP worked).
Let's say my service exposes 8080. I configured the Docker part (in the HCL file) to use bridge as the network mode, and Nomad maps 8080 to 43210.
Everything is fine until my service tries to reach the Consul slave to read configuration. Ideally, giving Spring the Nomad worker node's IP as the Consul host should suffice, but for some reason it doesn't.
I'm using the latest version of Nomad.
I configured my Nomad slaves like this: https://github.com/bmd007/statefull-geofencing-faas/blob/master/infrastructure/nomad/client1.hcl
And the link below shows how I configured/ran my Consul slave:
https://github.com/bmd007/statefull-geofencing-faas/blob/master/infrastructure/server2.yml
Note: if I use static port mapping and host as the network mode for Docker (in Nomad) I'll be fine, but then I can't deploy more than one instance of each application on each worker node (due to port conflicts).
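For reference, the network part of my job file looks roughly like this (a simplified sketch, not the exact file; the syntax may differ slightly between Nomad versions):
job "my-spring-app" {
  group "app" {
    network {
      mode = "bridge"
      port "http" {
        to = 8080   # container listens on 8080, Nomad picks a dynamic host port such as 43210
      }
    }
    task "app" {
      driver = "docker"
      config {
        image = "my-registry/my-spring-app:latest"
        ports = ["http"]
      }
    }
  }
}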
Nomad jobs listen on a specific host/port pair.
You might want to ssh into the server and run docker ps to see what host/port pair the job is listening on.
a93c5cb46a3e image-name bash 2 hours ago Up 2 hours 10.0.47.2:21435->8000/tcp, 10.0.47.2:21435->8000/udp foo-bar
Additionally, you will need to ensure that the Consul Nomad job is listening on 0.0.0.0, or on the specific IP of the machine. I believe that is this config value: https://www.consul.io/docs/agent/options.html#_bind
All of those will need to match up in order for Consul to be reachable.
More generally, I might recommend: if you're going to run Consul with Nomad, you may want to switch to host networking so that you don't have to deal with the specifics of networking within a container. Additionally, you could schedule Consul as a system job so that it is automatically present on every host.
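For example, a minimal agent configuration along those lines might look like this (a sketch; adjust the addresses and join targets to your setup):
# Consul agent config on each worker node
bind_addr   = "0.0.0.0"            # or the node's private IP (the -bind option linked above)
client_addr = "0.0.0.0"            # lets workloads on the node reach the HTTP API (8500) and DNS (8600)
retry_join  = ["<consul-server-ip>"]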
So I managed to solve the issue like this:
nomad.job.group.network.mode = host
nomad.job.group.network.port: port "http" {}
nomad.job.group.task.driver = docker
nomad.job.group.task.config.network_mode = host
nomad.job.group.task.config.ports = ["http"]
nomad.job.group.task.service.connect: connect { native = true }
nomad.job.group.task.env: SERVER_PORT= "${NOMAD_PORT_http}"
nomad.job.group.task.env: SPRING_CLOUD_CONSUL_HOST = "localhost"
nomad.job.group.task.env: SPRING_CLOUD_SERVICE_REGISTRY_AUTO_REGISTRATION_ENABLED = "false"
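Put together in an actual job file, those settings correspond roughly to the following (a sketch with placeholder names; see the full example linked below):
job "my-spring-service" {
  group "app" {
    network {
      mode = "host"
      port "http" {}   # dynamic port, exposed to the task as NOMAD_PORT_http
    }
    service {
      name = "my-spring-service"
      port = "http"
      connect {
        native = true   # Consul Connect native, no sidecar proxy
      }
    }
    task "app" {
      driver = "docker"
      config {
        image        = "my-registry/my-spring-service:latest"
        network_mode = "host"
        ports        = ["http"]
      }
      env {
        SERVER_PORT                                             = "${NOMAD_PORT_http}"
        SPRING_CLOUD_CONSUL_HOST                                = "localhost"
        SPRING_CLOUD_SERVICE_REGISTRY_AUTO_REGISTRATION_ENABLED = "false"
      }
    }
  }
}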
I run the Consul agents (slaves) using docker-compose alongside the Nomad agent (slave) with host as the network mode, exposing all required ports.
Example of nomad job: https://github.com/bmd007/statefull-geofencing-faas/blob/master/infrastructure/nomad/location-update-publisher.hcl
Example of consul agent config (docker-compose file): https://github.com/bmd007/statefull-geofencing-faas/blob/master/infrastructure/server2.yml
Disclaimer: the LAB is part of a cluster visualization framework called LiteArch Trafik, which I created as an interesting exercise to understand Nomad and Consul.
It took me a long time to shift my mind from K8s to Nomad and Consul; integrating them was one of the efforts I spent on over the last year.
When service resolution doesn't work, I've found it's more or less down to the DNS configuration on the servers.
There is a section for it in the HashiCorp documentation called DNS Forwarding:
Hashicorp DNS Forwarding
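For example, the dnsmasq option from that guide boils down to forwarding the .consul domain to the local Consul agent's DNS port (a sketch):
# /etc/dnsmasq.d/10-consul
server=/consul/127.0.0.1#8600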
I have created a LAB which explains how to set up Nomad and Consul.
But you can use the LAB separately.
I created the LAB after learning the hard way how to install the cluster and how to integrate Nomad and Consul.
To run the LAB you need Ubuntu Multipass installed.
You execute one script and you will get a fully functional cluster locally, with three servers and three nodes.
It also shows you how to install Docker and integrate the services with Consul and the DNS services on Ubuntu.
After running the LAB you will get the links to Nomad, Fabio, and Consul.
Hopefully it will guide you through the learning process of Nomad and Consul.
LAB: LAB
Trafik: Trafik Visualizer
I have a kubernetes cluster running on GKE and a Jenkins server running on a GCP instance.
I am using the Kubernetes plugin to dynamically create pods on the Kubernetes cluster, and I created a pipeline (declarative syntax) for this.
I am aware that the Jenkins slave agents communicate with the Jenkins master on port 50000.
A snip of the configuration
But for some reason, when I viewed the logs of the JNLP container created by Jenkins, I received an exception: tcpSlaveAgentListener not found.
A snip of the container log
According to the above image, I assume the tunneling is unsuccessful as it is trying to connect to http://34.90.46.204:8080/tcpSlaveAgentListener/ whereas it should connect to http://34.90.46.204:50000/tcpSlaveAgentListener/.
It was a lazy question for me to ask, but I solved the issue.
In the Manage Jenkins -> Configure Global Security settings:
For the option that sets a port for TCP inbound agents, unselect the "Disable" option (which is selected by default) and then provide a fixed port for the inbound agents to communicate on (50000).
A snip of the configuration
Jenkins uses a TCP port to communicate with agents connected inbound. If you're going to use inbound agents, you can allow the system to randomly select a port at launch (this avoids interfering with other programs, including other Jenkins instances). As it's hard for firewalls to secure a random port, you can instead specify a fixed port number and configure your firewall accordingly.
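If you manage Jenkins with the Configuration as Code plugin, the same fixed port can also be pinned declaratively, roughly like this (a sketch, assuming JCasC is installed):
jenkins:
  slaveAgentPort: 50000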
Hope this helps someone.
I'm used to connecting to my cluster using telepresence and accessing cluster services locally.
Now I need to make services in the cluster available to a group of applications that are running locally in Docker containers. You could say it's the inverse use case.
I have an app running in a Docker container. It accesses services that are deployed using docker-compose. This is done using a network:
docker network create myNetwork
# Make app 1 use it
docker network connect myNetwork app1
# App 2 uses docker-compose, so myNetwork is defined in it and here I just run:
docker-compose up
My app1 correctly accesses the containers/services running in app2. However, I still need it to access a service from my cluster!
I've tried making a tunnel from my host to the cluster with telepresence and then accessing the service as if it were on my host, but it doesn't seem to work. If I go into my app1 container and curl to check whether the service name resolves, I get:
curl: (6) Could not resolve host: my_cluster_service_name
Is my approach wrong? Am I missing an operation or consideration? How could I accomplish it?
Docker version: Docker version 19.03.8 for Mac
I've found a way to solve the problem.
Instead of trying to use telepresence as in the inverse use case, the solution is to use a port-forward with k9s. When creating it, it's important not to leave the default interface, which is set to localhost, but to use 0.0.0.0 instead to ensure that it listens for traffic from all interfaces.
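What k9s creates here is essentially a kubectl port-forward; the equivalent command would be roughly the following (the local and remote ports are placeholders for your service's ports):
kubectl port-forward --address 0.0.0.0 svc/my_cluster_service_name 8080:80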
Then I changed my containers from the inside, making the services point to my host's IP when resolving the service names. Use whichever method fits your case best; since it's not a production environment, I just hardcoded my host IP manually to check whether connectivity was achieved.
To point to a specific service in your cluster you need to use different ports, since they will all be mapped to your host through different port-forwards. Name resolution is no longer needed.
With this configuration, your container request will reach your host, where the port-forward routes it to the cluster. Connectivity is OK with this setup and the problem is solved.
I deployed a RabbitMQ server on my Kubernetes cluster and I am able to access the management UI from the browser. But my Spring Boot app cannot connect to port 5672, and I get a connection refused error. The same code works if I change my application.yml properties from the Kubernetes host to localhost and run a Docker image on my machine. I am not sure what I am doing wrong.
Has anyone tried this kind of setup?
Please help. Thanks!
Let's say the DNS name is rabbitmq. If you want to reach it, you have to make sure that rabbitmq's deployment has a Service attached that exposes the correct ports. You would then target rabbitmq:5672.
To make sure this, or something like it, exists you can debug k8s Services. Run kubectl get services | grep rabbitmq to make sure the Service exists. If it does, get the Service YAML by running kubectl get service rabbitmq-service-name -o yaml. Finally, check spec.ports[] for the ports that allow you to connect to the pod: look for 5672 in spec.ports[].port for AMQP. In some cases the port might have been changed, meaning spec.ports[].port might be 3030, for instance, while spec.ports[].targetPort is 5672.
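For reference, a Service exposing both AMQP and the management UI might look roughly like this (a sketch; names and labels will differ in your setup):
apiVersion: v1
kind: Service
metadata:
  name: rabbitmq
spec:
  selector:
    app: rabbitmq
  ports:
    - name: amqp
      port: 5672        # port clients use on the Service
      targetPort: 5672  # port the RabbitMQ container listens on
    - name: management
      port: 15672
      targetPort: 15672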
Are you exposing RabbitMQ's TCP port outside the cluster?
Maybe only the management port has been exposed.
If you can connect to the management UI but not to port 5672, that may indicate that port 5672 is not exposed outside the cluster.
Note: if I haven't understood your question correctly, please let me know.
Good luck
I am migrating my spring cloud eureka application to AWS ECS and currently having some trouble doing so.
I have an ECS cluster on AWS in which two EC2 services were created:
Eureka-server
Eureka-client
Each service has a task running on it.
QUESTION:
How do I establish a "docker network" between these two services so that I can register my eureka-client with the eureka-server's registry? Having them in the same cluster doesn't seem to do the trick.
Locally I am able to establish a "docker network" to achieve this. Is it possible to have a "docker network" on AWS?
The problem here lies in the way ECS clusters work. If you go to your dashboard and check out your task definition, you'll see an IP address which AWS assigns to the resource automatically.
In Eureka's case, you need to somehow obtain this IP address when deploying your eureka-client apps and use it to register with your eureka-server. But of course your tasks get destroyed and recreated, so you can easily lose it.
I've done this before, and there are a couple of ways to achieve it. Here is one of them:
For the EC2 instances on which you intend to place the ECS tasks acting as the eureka-server (registry), assign Elastic IP addresses so you always know which host IP address to connect to.
You also need to tag them properly so you can refer to them in the next step.
Then, switching back to ECS, when deploying your eureka-server tasks there is an argument in the task definition configuration called a placement constraint.
This allows you to add a constraint to your tasks so they are placed on the instances you assigned Elastic IP addresses to in the previous steps.
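In a task definition (or service) this looks roughly like the following, where purpose is a custom ECS attribute you would set yourself on those container instances (a sketch):
"placementConstraints": [
  {
    "type": "memberOf",
    "expression": "attribute:purpose == eureka"
  }
]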
Now, if this is all good and you've deployed everything, you should be able to point your eureka-client apps at that IP and have them registered.
I know this looks dirty and kind of complicated, but the thing is that the Netflix OSS Eureka project has missing parts, which I believe are their proprietary implementation for internal use that they don't want to share.
Another, and probably cooler, way of doing this is using a Route 53 domain or alias record for your instances, so instead of using an Elastic IP you can refer to them using DNS.