For my containerized application, I want Marathon to allocate the same host port for the container's bridge network endpoint across all instances of that application. Specifying a fixed host port risks resource exhaustion, while not specifying one causes a random port to be picked for each instance.
I don't mind a randomly picked port as long as it is identical across all instances of my application. Is there a way to ask Marathon to pick such a host port for my container endpoint?
I think what you are really after is service discovery / load balancing. Have a look at the Marathon docs at
https://mesosphere.github.io/marathon/docs/service-discovery-load-balancing
to get an overview.
Also, see the Docker networking docs at
https://mesosphere.github.io/marathon/docs/native-docker.html
You can probably make use of either the hostPort property or the more general ports property.
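For example, with bridge networking you can leave hostPort as 0 (random per instance) and rely on the servicePort, which Marathon assigns once per app and keeps identical across all instances; a load balancer such as haproxy-marathon-bridge then maps the service port to the per-instance host ports. A sketch of an app definition, assuming the Docker containerizer (the id, image, and port numbers are placeholders):

{
  "id": "/my-app",
  "instances": 3,
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "my-app:latest",
      "network": "BRIDGE",
      "portMappings": [
        { "containerPort": 8080, "hostPort": 0, "servicePort": 0, "protocol": "tcp" }
      ]
    }
  }
}

With "servicePort": 0, Marathon picks a free service port from its configured range and reuses it for every instance of the app.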
I have a .NET Kafka client (using librdkafka via Confluent's .NET client) running on a physical server with two active network interfaces. One is 10G and the other is 1G, and both have static IP addresses assigned. Our networking team handles the configuration and is unlikely to change their practices for one application, so I'd like to handle this client-side. I should also mention that the 1G and 10G interfaces are on the same network.
Since my Kafka cluster (3-node) is all 10G, I would like to require my application's consumer to bind to the 10G IP address. Looking through all of the documentation, I can't find anything about defining this on the client.
I would like to avoid any "hacky" solutions like setting Kafka to deny any non-whitelisted IP addresses or DNS tomfoolery.
Thanks in advance!
Just to be sure: do you know if your server is doing interface bonding (meaning traffic is load-balanced across the interfaces; though it's unlikely bonding would be configured on interfaces of different speeds)?
If not, since your two interfaces are on the same network, only one of them will be used to reach that network (unless you have an exotic routing configuration). That interface is determined by your default route.
If it's a Linux server, you can check as follows:
ip route
default via X.X.X.X dev YOURDEFAULTINTERFACE
If the default route points at the 10G interface, there is nothing to do; you can be sure traffic will use it.
If not, there is nothing you can do on the Kafka side, as this is purely an OS-level setting: the kernel will route all traffic through the default interface.
Again, I want to stress that this is only the case because both your interfaces are on the same network.
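If the default route currently points at the 1G interface, you can move it to the 10G one with iproute2. A sketch, where eth1 and 10.0.0.1 are placeholders for your actual 10G interface name and gateway (note this does not persist across reboots unless you also update your distribution's network configuration):

# Show the current default route
ip route
# Point the default route at the 10G interface
sudo ip route replace default via 10.0.0.1 dev eth1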
If you have any doubts about this, please share your network configuration in detail (the output of ip addr and ip route).
Yannick
We have Kafka and ZooKeeper installed on a single AWS EC2 instance. Our Kafka producers and consumers run on separate EC2 instances in the same VPC, with the same security group as the Kafka instance. In the producer and consumer configs we use the internal IP address of the Kafka server to connect to it.
However, we have noticed that we need to set the public IP address of the EC2 server as advertised.listeners for the producers and consumers to connect to the Kafka server:
advertised.listeners=PLAINTEXT://PUBLIC_IP:9092
We also have to whitelist the public IP addresses and open port 9092 on each of our EC2 servers running producers and consumers.
We want the traffic to flow over the internal IP addresses. Is there a way to avoid whitelisting the public IP addresses and opening port 9092 for every server running a producer or consumer?
If you don't want to open access to everyone on any of your servers, I would recommend putting a proper high-performance web server such as nginx or Apache HTTPD in front of your application servers, acting as a reverse proxy. That way you can also add SSL encryption, and your servers stay on a private network while only the web server is exposed. It's easy to set up, and you can find many tutorials, such as this one: http://webapp.org.ua/sysadmin/setting-up-nginx-ssl-reverse-proxy-for-tomcat/
Because of the variable nature of the ecosystems Kafka may need to work in, it only makes sense that you be explicit in declaring the locations Kafka can use. The only way to guarantee that external parts of any system can reach it via an IP address is to advertise an external IP address.
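That said, assuming a broker version that supports multiple named listeners (0.10.2+), you can keep VPC traffic on private addresses by declaring two listeners and advertising the private IP on one and the public IP on the other. A sketch of server.properties (the IPs and listener names are placeholders):

# Bind both listeners on all interfaces
listeners=INTERNAL://0.0.0.0:9092,EXTERNAL://0.0.0.0:9093
# Advertise the private IP to VPC clients and the public IP to outside clients
advertised.listeners=INTERNAL://INTERNAL_IP:9092,EXTERNAL://PUBLIC_IP:9093
listener.security.protocol.map=INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
inter.broker.listener.name=INTERNAL

Producers and consumers inside the VPC would then bootstrap against INTERNAL_IP:9092, so only external clients need the public IP whitelisted.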
I searched before writing this, and everything I found uses load-balancer hardware or software at some point. What I need to know is: can we do load balancing without a dedicated hardware or software load balancer?
While I was searching I came across the statement below.
"Another way to distribute requests is to have a single virtual IP (VIP) that all clients use. And for the computer on that 'virtual' IP to forward the request to the real servers"
Could anyone please let me know how to do virtual IP load balancing?
I have read lots of articles but could not find anything on actual VIP configuration or setup; all I found was theoretical material.
I need to divide the incoming requests between two applications, with both application servers up and running.
Below is the architecture:
Application Node 1 : 10.66.204.10
Application Node 2 : 10.66.204.11
Virtual IP: 10.66.204.104
Run an instance of Nginx and use it as a load-balancing gateway for connections. There is no difference between using virtual IPs and actual IPs, although it helps if your cloud setup is on LAN-based IPs, for both security and ease.
Depending on your setup, there are two paths to go:
Dynamically assign connections to a server. This can be done as an even split, or by sending connections to one instance until it fills up and then overflowing to the next.
Assign each function its own IP. For example, you can configure the gateway to serve static content itself and request dynamic content from other servers.
Configuring Nginx is a sizeable job, but it's a relatively well-documented process, and it shouldn't be hard to find a guide that suits your needs.
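As a rough sketch using the node IPs from the question (the upstream name and port 80 are assumptions), a minimal Nginx gateway that splits requests evenly between the two nodes could look like this:

# Round-robin pool of the two application nodes
upstream app_nodes {
    server 10.66.204.10;
    server 10.66.204.11;
}

server {
    # Listen on the virtual IP
    listen 10.66.204.104:80;
    location / {
        proxy_pass http://app_nodes;
    }
}

By default Nginx round-robins across the upstream servers; adding weight or max_conns parameters to the server lines covers the fill-then-overflow variant.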
We have a web instance (nginx) behind an ELB, which we manually power on when required.
The web app starts up quickly and returns a successful 200 response when we run wget locally.
However, the website will not load because the ELB isn't sending health-check requests to the instance. I can confirm this by viewing the nginx access logs.
The workaround I've been using is to remove the web instance from the ELB and add it back in.
This seems to activate the health checks again, and they become visible in our access logs.
I've edited our health-check settings to allow a longer timeout and raised the Unhealthy Threshold to 3, but this has made no difference.
Currently our Health Check Config is:
Ping Target: HTTPS:443/login
Timeout: 10 sec
Interval: 12 sec
Unhealthy: 2
Healthy: 2
Listener:
HTTPS 443 to HTTPS 443 SSL Cert
The ELB and the web instance are both in the same public VPC security group, which has HTTP/HTTPS open to 0.0.0.0/0.
Can anyone help me figure out why the ELB Health checks aren't kicking in as soon as the web instance has started? Is this by design or is there a way of automatically initiating the checks? Thank you.
Niall
Does your instance come up with a different IP address each time you start it?
Elastic Load Balancing registers your load balancer with your EC2 instances using the IP addresses that are associated with your instances. When an instance is stopped and then restarted, the IP address associated with your instance changes. Your load balancer cannot recognize the new IP address, which prevents it from routing traffic to your instances.
— http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/TerminologyandKeyConcepts.html#registerinstance
It would seem like the appropriate approach to getting the instance re-associated would be for code running on the web server instance to programmatically register itself with the load balancer via the API when the startup process determines that the instance is ready for web traffic.
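For example, with the AWS CLI and the classic ELB API (the load balancer name and instance ID below are placeholders), a startup script could run:

# Re-register this instance once it is ready for traffic
aws elb register-instances-with-load-balancer --load-balancer-name my-elb --instances i-0123456789abcdef0
# And the counterpart before a planned stop
aws elb deregister-instances-from-load-balancer --load-balancer-name my-elb --instances i-0123456789abcdef0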
Update:
Luke#AWS: "You should be de-registering from your ELB during a stop/start."
— https://forums.aws.amazon.com/thread.jspa?messageID=463835
I'm curious what the console shows as the reason why the instance isn't active in the ELB. There does appear to be some interaction between ELB and EC2 whereby ELB has an awareness of an instance's EC2 state (e.g. "stopped") that goes beyond the health checks. This isn't well documented, but I would speculate that ELB, based on that awareness, decides it isn't worth bothering with the health checks, and the console may provide something useful to at least confirm this.
It's possible that, given sufficient time, ELB might become aware that the instance is running again and start sending health checks, but it's also possible that instances have a hidden global meta-identifier separate from i-xxxxxx and that a stopped and restarted instance is, from the perspective of this identifier, a different instance.
...but the answer seems to be that stopping an instance and restarting it requires re-registration with the ELB.
I'm running into some deployment issues using Akka remoting to implement a small search application.
I want to deploy my ActorSystem on a set of local cluster machines to use them as workers, but I'm a bit confused about what to put into my application.conf to make this happen. For example, I can use:
akka.remote {
  transport = "akka.remote.netty.NettyRemoteTransport"
  netty {
    hostname = "0.0.0.0"
    port = 2552
  }
}
Each worker just runs the ActorSystem at startup.
This allows my worker machines to bind to their address when they start up, but then they refuse to listen to messages:
beaker-24: [ERROR] ... dropping message DaemonMsgWatch for non-local recipient akka://SearchService#beaker-24:2552/remote at akka://SearchService#0.0.0.0:2552
The documentation I've found for this so far only discusses deployment on my localhost, which is not so useful :). I'm hoping there is a way to do this without generating a separate configuration for each host.
Update:
Using an empty string as the hostname allows for contacting the host via the normal IP address. Addressing using the hostname itself doesn't work at the moment.
Setting “0.0.0.0” as the host name will currently disable remoting in practice, because that is not a legal IP address to send to. Background: actor references get the configured IP (or host name) inserted into their address part when they leave the local system, and that is exactly their “pointer home” that other systems use to send messages back.
There has been an effort by Scott that would enable a system to receive replies at a different address, but that is not included yet, and we may well choose a different solution to this problem.
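One way to avoid generating a separate configuration per host, assuming the Typesafe Config library's optional substitution syntax, is to set a default hostname and let each worker override it from an environment variable at startup (the variable name HOST is an assumption):

akka.remote {
  transport = "akka.remote.netty.NettyRemoteTransport"
  netty {
    # Default, used when HOST is not set in the environment
    hostname = "127.0.0.1"
    # Overrides the default if the HOST environment variable is present
    hostname = ${?HOST}
    port = 2552
  }
}

Each worker then exports HOST as its own routable hostname or IP (e.g. HOST=$(hostname -f)) before launching, so a single application.conf can be shared across all machines.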