I have a Kubernetes (0.15) cluster running on CoreOS instances on Amazon EC2.
When I create a service that I want to be publicly accessible, I currently add some private IP addresses of the EC2 instances to the service description like so:
{
    "kind": "Service",
    "apiVersion": "v1beta3",
    "metadata": {
        "name": "api"
    },
    "spec": {
        "ports": [
            {
                "name": "default",
                "port": 80,
                "targetPort": 80
            }
        ],
        "publicIPs": ["172.1.1.15", "172.1.1.16"],
        "selector": {
            "app": "api"
        }
    }
}
Then I can add these IPs to an ELB load balancer and route traffic to those machines.
But for this to work I need to maintain the list of all the machines in my cluster in every service that I am running, which feels wrong.
What's the currently recommended way to solve this?
If I know the PortalIP of a service, is there a way to make it routable in the AWS VPC infrastructure?
Is it possible to assign external static (Elastic) IPs to Services and have those routed?
(I know of createExternalLoadBalancer, but that does not seem to support AWS yet)
If anyone else reaches this question: external load balancer support is available in the latest Kubernetes version.
Link to the documentation
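With that support, the service itself can request a cloud load balancer. A minimal sketch (the name and selector are placeholders, and this assumes the v1 API):

# Create a service whose spec asks the cloud provider for a load balancer.
kubectl create -f - <<'EOF'
{
    "kind": "Service",
    "apiVersion": "v1",
    "metadata": { "name": "api" },
    "spec": {
        "type": "LoadBalancer",
        "ports": [{ "name": "default", "port": 80, "targetPort": 80 }],
        "selector": { "app": "api" }
    }
}
EOF

On AWS this provisions an ELB for the service automatically, so you no longer maintain the instance list yourself.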
You seem to have a pretty good understanding of the space - unfortunately I don't have any great workarounds for you.
CreateExternalLoadBalancer is indeed not ready yet - it's taking a bit of an overhaul of the services infrastructure to get it working for AWS because of how different AWS's load balancer is from GCE's and OpenStack's load balancers.
Unfortunately, there's no easy way to make the PortalIP or an external static IP route directly to the pods backing the service, because doing so would require updating the routing infrastructure whenever any of the pods gets moved or recreated. You'd have to have the PortalIP or external IP route to the nodes inside the cluster, which is what you're already effectively doing with the PublicIPs field and ELB.
What you're doing with the load balancer right now is probably the best option - it's basically what CreateExternalLoadBalancer will do once it's available. You could instead put the external IPs of the instances into the PublicIPs field and then reach the service through one of them, but that's pretty tightly coupling external connectivity to the lifetime of the node IP you use.
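In the interim, the registration step can at least be scripted rather than maintained by hand. A sketch with the AWS CLI (the load balancer name and instance IDs are placeholders):

# Register two of the cluster nodes with an existing classic ELB.
aws elb register-instances-with-load-balancer \
    --load-balancer-name api-lb \
    --instances i-0123456789abcdef0 i-0fedcba9876543210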
How do I run two Laravel Docker apps on the same server using one container per app and point to two domains?
Both apps are on the same AWS EC2 server.
e.g.:
container one points to -> one.mydomain.com
container two points to -> two.mydomain.com
I'm new to this.
Is it even possible?
An Apache solution would be preferable.
Yes, it is possible, and there is more than one way to do it. I would suggest one of these two approaches:
An AWS load balancer with host-based routing and a different published port for each app
Nginx
With the AWS approach you need to run your containers using ECS.
Create a load balancer
Create a cluster
Create a service
Attach the service to the load balancer and update the load balancer routing to host-based routing, so that app1.example.com routes to app1 (see the sketch below)
Repeat the above steps for app2.
The above is the standard way to deal with containers on AWS.
You can read more in gentle-introduction-to-how-aws-ecs-works-with-example-tutorial and Run containerized applications in production.
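For the host-based routing step, a sketch with the AWS CLI (all ARNs are placeholders; this assumes an Application Load Balancer listener and one target group per app):

# Forward requests whose Host header is app1.example.com to app1's target group.
aws elbv2 create-rule \
    --listener-arn arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/my-alb/abc/def \
    --priority 10 \
    --conditions Field=host-header,Values=app1.example.com \
    --actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/app1/abc
# Repeat with priority 20 and app2.example.com for the second app.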
With Nginx, you need to manage everything yourself.
Run both containers on EC2
Install Nginx
Update Nginx configuration to route traffic based on DNS
Update the DNS entries to point to the EC2 instance's public IP: both names, for example app1.example.com and app2.example.com, will point to the same EC2 instance, and Nginx will decide which app serves the request.
server {
    server_name app1.example.com;
    location / {
        proxy_pass http://127.0.0.1:HOSTPORT;
    }
}
server {
    server_name app2.example.com;
    location / {
        proxy_pass http://127.0.0.1:HOSTPORT;
    }
}
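HOSTPORT above is whatever host port each container publishes; for example (image names are placeholders):

# Publish each app on its own host port, matching the proxy_pass targets above.
docker run -d --name app1 -p 8081:80 example/laravel-app1   # proxy_pass http://127.0.0.1:8081
docker run -d --name app2 -p 8082:80 example/laravel-app2   # proxy_pass http://127.0.0.1:8082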
I recommend these two approaches, and Nginx over Apache, but if you are interested you can check apache-vhosts.
I am migrating my spring cloud eureka application to AWS ECS and currently having some trouble doing so.
I have an ECS cluster on AWS in which two EC2 services were created:
Eureka-server
Eureka-client
Each service has a task running on it.
QUESTION:
How do I establish a "docker network" between these two services such that I can register my eureka-client to the eureka-server's registry? Having them in the same cluster doesn't seem to do the trick.
Locally I am able to establish a "docker network" to achieve this. Is it possible to have a "docker network" on AWS?
The problem here lies in the way ECS clusters work. If you go to your dashboard and check out your task definition, you'll see an IP address which AWS assigns to the resource automatically.
In Eureka's case, you need to somehow obtain this IP address while deploying your eureka-client apps and use it to register with your eureka-server. But of course tasks get destroyed and recreated, so you easily lose that address.
I've done this before, and there are a couple of ways to achieve it. Here is one of them:
For the EC2 instances on which you intend to place the eureka-server (registry) tasks, assign Elastic IP addresses so you always know which host IP address to connect to.
You also need to tag them properly so you can refer to them in the next step.
Then, switching back to ECS, when deploying your eureka-server tasks there is an argument in the task definition configuration called placement_constraint.
This allows you to constrain your tasks so they are placed on the instances you assigned Elastic IP addresses to in the previous steps.
Now if this is all set up and you have deployed everything, you should be able to point your eureka-client apps at that IP and have them register.
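A rough sketch of those steps with the AWS CLI (all IDs, names, and the attribute are placeholders):

# Pin an Elastic IP to the instance that will host the registry.
aws ec2 associate-address --instance-id i-0123456789abcdef0 --allocation-id eipalloc-0abc123

# Tag the corresponding ECS container instance with a custom attribute...
aws ecs put-attributes --cluster my-cluster \
    --attributes name=role,value=eureka-server,targetId=<container-instance-arn>

# ...and constrain the eureka-server service to instances carrying that attribute.
aws ecs create-service --cluster my-cluster --service-name eureka-server \
    --task-definition eureka-server:1 --desired-count 1 \
    --placement-constraints type=memberOf,expression="attribute:role == eureka-server"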
I know this looks dirty and kind of complicated, but the Netflix OSS project for Eureka has missing parts, which I believe belong to their proprietary internal implementation that they don't want to share.
Another, and probably cooler, way of doing this is to use a Route53 domain or alias record for your instances, so that instead of using an Elastic IP you can refer to them via DNS.
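A sketch of that Route53 idea (the hosted zone ID, record name, and IP are placeholders):

# Point a stable DNS name at the registry host instead of hard-coding the EIP.
aws route53 change-resource-record-sets --hosted-zone-id Z0000000EXAMPLE \
    --change-batch '{"Changes": [{"Action": "UPSERT", "ResourceRecordSet":
        {"Name": "eureka.example.com", "Type": "A", "TTL": 60,
         "ResourceRecords": [{"Value": "203.0.113.10"}]}}]}'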
I am looking into using Consul, and cannot figure out what Consul defines as a 'service' (i.e., how does Consul start one, and what information is passed to it on start?).
I have a working cluster, and have written custom init.d services in Ubuntu to try and test this out, but the init.d services are never called when I register them as services with Consul, and they do not show up in the web UI either.
Local consul server config:
{
    "bootstrap": true,
    "server": true,
    "log_level": "DEBUG",
    "enable_syslog": true,
    "datacenter": "dc1",
    "data_dir": "data",
    "acl_datacenter": "dc1",
    "acl_default_policy": "allow",
    "encrypt": "secret",
    "acl_master_token": "secret",
    "ui": true
}
Super simple script to try and determine information passed:
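# Dump the argument count and the environment to see what, if anything, Consul passes in.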
echo "$#" > /home/user/file1
set > /home/user/file2
My end goal is to simply have Consul start a custom script on a given node (with no web interface), that passes the information in Consul's service definition into it (ID and tags specifically).
I imagine I am understanding something elementally wrong, and would like any guidance you have to offer.
Thanks for reading!
Edit: For example, they show lower on that page how you can run two Redis servers on multiple ports. How are the services being started, and how is that information passed to them?
Edit 2: Thanks for the replies, they both helped me understand this better! (so I just accepted the one that came first.)
Consul has nothing to do with "starting services"; it only provides information about services to other services.
Edit: For example, they show lower on that page how you can run two redis servers on multiple ports. How are the services being started and how is that information passed to them?
In this example both Redis servers were started outside of Consul. For example, we use supervisord for this, or start Redis as a service via init.d.
So, how it works:
You start both Redis instances yourself (init.d or supervisord, it doesn't matter).
You register these instances as services in Consul. You can do it via a JSON config file in the Consul config dir, or you can register them via the API (from the terminal using curl, for example, as shown below). By the way, your own services can register themselves.
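For example, registering one of the Redis instances through the local agent's HTTP API (the name, ID, and port here are just illustrative):

# Register a service entry with the local Consul agent.
curl -X PUT http://localhost:8500/v1/agent/service/register \
    -d '{"Name": "redis", "ID": "redis1", "Port": 6379, "Tags": ["master"]}'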
Consul's "services" is an architecture pattern. It allows you to build a dynamic service discovery system. This concept should not be conflated with other service concepts (in the OS, like init.d, or any other). A Consul service is just a virtual entity in Consul's service catalog.
The idea is the following:
Something registers a Service entry in Consul, defined as (Name, Address, Node, HealthCheck)
This Service entry can then be discovered by Name via Consul's DNS, or via the REST API
Service availability and status are controlled by running health checks
You, as the developer of the distributed system, are responsible for registering and unregistering services and for providing valid health-checking routines.
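Discovery then works like this (assuming the default ports, 8600 for DNS and 8500 for HTTP, and the illustrative "redis" service registered above):

# Look a service up via Consul's DNS interface...
dig @127.0.0.1 -p 8600 redis.service.consul SRV
# ...or via the REST API.
curl http://localhost:8500/v1/catalog/service/redis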
I am developing a Lift application deployed on AWS Elastic Beanstalk Tomcat 7 container. My application requires sticky sessions when utilizing the Elastic Load Balancer.
Since my application uses the standard servlet stuff, it serves a JSESSIONID cookie to the client. I would like to configure AWS to use application-controlled session stickiness, where given the name of my cookie, it will keep track of the sessions. However, in Elastic Beanstalk Load Balancer configuration, I only see the ability to configure an AWS-managed cookie. I suppose this will work, but I would rather only serve one cookie and have the stickiness coincide with the sessions consistently with the way we have them configured in our application.
While it appears that we can configure application-controlled session stickiness in the EC2 settings associated with my EB instance, the settings we apply get clobbered any time we make changes in the EB console. This isn't terribly surprising behavior, but I would expect that we would soon forget this behavior and accidentally wipe out our settings.
Does anyone know if it is possible to make the stickiness sticky? :)
Elastic Load Balancer (ELB) supports application-controlled session stickiness (http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/elb-sticky-sessions.html#enable-sticky-sessions-application). If you want to use it, you can create an .ebextensions script to modify the Beanstalk ELB; you can't do this via the Beanstalk web console.
To configure it via .ebextensions, just create a directory named .ebextensions inside your root Beanstalk app and create a file (e.g. 00-load-balancer.config) inside the .ebextensions directory.
The .ebextensions/00-load-balancer.config file could be:
{
    "Resources": {
        "AWSEBLoadBalancer": {
            "Type": "AWS::ElasticLoadBalancing::LoadBalancer",
            "Properties": {
                "AppCookieStickinessPolicy": [
                    {
                        "PolicyName": "HttpSessionStickinessPolicy",
                        "CookieName": "JSESSIONID"
                    }
                ],
                "Listeners": [
                    {
                        "LoadBalancerPort": 80,
                        "Protocol": "HTTP",
                        "InstancePort": 80,
                        "InstanceProtocol": "HTTP",
                        "PolicyNames": [
                            "HttpSessionStickinessPolicy"
                        ]
                    }
                ]
            }
        }
    }
}
The config modifies the ELB to listen on port 80 and forward traffic to port 80 on an EC2 instance chosen according to the HttpSessionStickinessPolicy policy, which provides application-controlled session stickiness based on the JSESSIONID cookie.
Please refer to the AWS Elastic Beanstalk (http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/environment-resources.html) and AWS CloudFormation (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-ec2-elb.html) documentation to learn more.
Is there any way to make new instances added to an autoscaling group associate with an elastic IP? I have a use case where the instances in my autoscale group need to be whitelisted on remote servers, so they need to have predictable IPs.
I realize there are ways to do this programmatically using the API, but I'm wondering if there's any other way. It seems like CloudFormation may be able to do this.
You can associate an Elastic IP with ASG instances using manual or scripted API calls (see the sketch after the list below), just as you would with any other instance -- however, there is no built-in automated way to do this. ASG instances are designed to be ephemeral/disposable, and Elastic IP association goes against this philosophy.
To solve your problem re: whitelisting, you have a few options:
If the system that requires predictable source IPs is on EC2 and under your control, you can disable IP restrictions and use EC2 security groups to secure traffic instead
If the system is not under your control, you can set up a proxy server with an Elastic IP and have your ASG instances use the proxy for outbound traffic
You can use http://aws.amazon.com/vpc/ to gain complete control over instance addressing, including network egress IPs -- though this can be time-consuming
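To illustrate the scripted route mentioned above, a sketch of a user-data boot script (the allocation ID is a placeholder, and the instance needs an IAM role allowing ec2:AssociateAddress):

# Look up this instance's ID from instance metadata and attach a pre-allocated EIP.
INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
aws ec2 associate-address --instance-id "$INSTANCE_ID" --allocation-id eipalloc-0abc123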
There are three approaches I could find to doing this. CloudFormation will just automate it, but you need to understand what's going on first.
1.- As @gabrtv mentioned, use a VPC; this lends itself to two options.
1.1- Within a VPC, use a NAT Gateway to route all traffic in and out through the gateway. The gateway has an Elastic IP for internet traffic, so you whitelist the NAT Gateway's IP on your server side (see the CLI sketch after this list). Look for NAT Gateway in the AWS documentation.
1.2-Create a Virtual Private Gateway/VPN connection to your backend servers in your datacenter and route traffic through that.
1.2.a-Create your instances within a DEDICATED private subnet.
1.2.b-Whitelist the entire subnet on your side, any request from that subnet will be allowed in.
1.2.c- Make sure the routes in the subnet are correct.
(I'm skipping 2 on purpose since that is 1.2)
3.-The LAZY way:
Utilize AWS Opsworks to do two things:
1st: Allocate a RESOURCE pool of Elastic IPs.
2nd: Start LOAD instances on demand and AUTO-assign each one an Elastic IP from the pool.
For the second part you will need the 24/7 instances to be your minimum and the Load instances to be your MAX. AWS OpsWorks now allows CloudWatch alarms to trigger instance startup, so it is very similar to an ASG.
The only disadvantages of OpsWorks are that instances are stopped rather than terminated when the load goes down, and that you must "create" instances beforehand. You also depend on Chef Solo to initialize your instances, but this is the only way I could find to get EIPs auto-assigned to newly created instances.
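For option 1.1, creating the NAT Gateway looks roughly like this (the subnet and allocation IDs are placeholders; the private subnets' route tables still need a default route pointing at the gateway):

# Allocate an Elastic IP and attach a NAT Gateway to a public subnet;
# instances in the private subnets then egress through this one whitelistable IP.
aws ec2 allocate-address --domain vpc
aws ec2 create-nat-gateway --subnet-id subnet-0abc123 --allocation-id eipalloc-0abc123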
Cheers!