Creating a RethinkDB cluster on Amazon ECS - rethinkdb

I am using the official Docker image for RethinkDB, and I am trying to use AWS EC2 Container Service (ECS) to create a RethinkDB cluster. I can easily get standalone instances to run, but have had no luck creating a cluster.
I have tried various security group settings; I even opened everything up, but no luck. When I launch the Docker image I pass in --bind all and --join [ip]:29015, but nothing happens.
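For reference, the launch commands look roughly like this (28015 is the client driver port and 29015 the intracluster port; [ip] is the first node's address):

# first node (runs standalone until others join)
docker run -d -p 28015:28015 -p 29015:29015 rethinkdb rethinkdb --bind all
# subsequent nodes, joining the first
docker run -d -p 28015:28015 -p 29015:29015 rethinkdb rethinkdb --bind all --join [ip]:29015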
Has anyone got this to work?

The default networking for Docker on Amazon ECS is the docker0 bridge. This means multiple containers on the same EC2 instance can talk to each other through the bridge, but not to containers on other EC2 instances across the ECS cluster.
You could set the networkMode in your task definition to 'host', which should let you use the network of your EC2 instances directly and rely on the security groups you have defined. See http://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definition_parameters.html#network_mode.
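A minimal sketch of such a task definition (the family, names, and join address are illustrative; the key line is networkMode):

{
  "family": "rethinkdb",
  "networkMode": "host",
  "containerDefinitions": [
    {
      "name": "rethinkdb",
      "image": "rethinkdb",
      "memory": 512,
      "command": ["rethinkdb", "--bind", "all", "--join", "[ip]:29015"],
      "portMappings": [
        { "containerPort": 29015, "hostPort": 29015 }
      ]
    }
  ]
}

With host networking the container binds to the instance's network interfaces directly, so your security group rules for port 29015 apply as-is.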
The alternative is to set up an overlay network using something like Flannel, Weave, or Open vSwitch. See https://aws.amazon.com/blogs/apn/architecting-microservices-using-weave-net-and-amazon-ec2-container-service/ for an example using Weave.
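As a very rough sketch of the Weave route (the peer address is a placeholder; see the linked post for the full ECS integration, including the proxy setup):

# on each EC2 instance in the ECS cluster
sudo curl -L git.io/weave -o /usr/local/bin/weave
sudo chmod a+x /usr/local/bin/weave
weave launch <ip-of-another-instance>

Containers attached to the Weave network can then reach each other across instances regardless of the docker0 bridge.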

Related

How to host Moqui on AWS EC2

Is there a way to host Moqui on AWS? I was trying to host Moqui using an EC2 instance but couldn't figure out a way to connect them.
The Run and Deploy document on moqui.org has a section for a simple recommended deployment using ElasticBeanstalk and RDS:
https://www.moqui.org/m/docs/framework/Run+and+Deploy#AWSElasticBeanstalkandRDS
With more detail about how you want to set things up on AWS, the answer might vary from this.
For clustered setups things get more involved: you need the right settings for Hazelcast AWS discovery, and it is best to use an external ElasticSearch server (such as an AWS Elasticsearch Service instance) and configure Moqui, via environment variables, to use the Java REST Client mode instead of the Embedded Node mode. The settings for the moqui-hazelcast and moqui-elasticsearch components can be seen in the MoquiConf.xml file in each component.

How to deploy Netflix Eureka server and Eureka client with a Docker network on an AWS ECS cluster

I am migrating my Spring Cloud Eureka application to AWS ECS and am currently having some trouble doing so.
I have an ECS cluster on AWS in which two EC2-backed services were created:
Eureka-server
Eureka-client
Each service has a task running on it.
QUESTION:
How do I establish a "docker network" between these two services so that I can register my eureka-client with the eureka-server's registry? Having them in the same cluster doesn't seem to do the trick.
Locally I am able to establish a "docker network" to achieve this. Is it possible to have a "docker network" on AWS?
The problem lies in the way ECS clusters work. If you go to your dashboard and check out your task definition, you'll see an IP address which AWS assigns to the resource automatically.
In Eureka's case, you need to somehow obtain this IP address while deploying your eureka-client apps and use it to register with your eureka-server. But of course your tasks get destroyed and recreated, so you easily lose that address.
I've done this before, and there are a couple of ways to achieve it. Here is one of them:
For the EC2 instances on which you intend to place your eureka-server (registry) tasks, assign Elastic IP addresses so you always know which host IP address to connect to.
You also need to tag them properly so you can refer to them in the next step.
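A sketch of those two steps with the AWS CLI (all IDs and the attribute name are placeholders; note that for the placement step below, the "tag" needs to be an ECS custom attribute on the container instance, not just an EC2 tag):

# associate a pre-allocated Elastic IP with the instance
aws ec2 associate-address --instance-id i-0123456789abcdef0 --allocation-id eipalloc-0123456789abcdef0
# mark the container instance with a custom attribute for task placement
aws ecs put-attributes --cluster my-cluster --attributes name=eureka-role,value=server,targetId=<container-instance-arn>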
Then, switching back to ECS: when deploying your eureka-server tasks, there is a parameter in the task definition configuration called placement_constraint.
This allows you to constrain your tasks so they are placed on the instances you assigned Elastic IP addresses to in the previous steps.
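In the task definition JSON this is the placementConstraints block; a minimal sketch using the attribute from the previous step (the attribute name is illustrative):

"placementConstraints": [
  {
    "type": "memberOf",
    "expression": "attribute:eureka-role == server"
  }
]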
Now, if this is all good and you have deployed everything, you should be able to point your eureka-client apps at that IP and have them register.
I know this looks dirty and kind of complicated, but the thing is that the Netflix OSS Eureka project has missing parts, which I believe are proprietary pieces for their internal use that they don't want to share.
Another, and probably cleaner, way of doing this is to use a Route53 domain or alias record for your instances, so that instead of an Elastic IP you can refer to them by DNS name.
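A sketch of that with the AWS CLI (the zone ID, record name, and IP are placeholders):

aws route53 change-resource-record-sets --hosted-zone-id Z0123456789EXAMPLE --change-batch '{"Changes":[{"Action":"UPSERT","ResourceRecordSet":{"Name":"eureka.example.internal","Type":"A","TTL":60,"ResourceRecords":[{"Value":"203.0.113.10"}]}}]}'

Your eureka-client apps can then point at the DNS name instead of a hard-coded IP.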

Elasticsearch on AWS Fargate

I am facing a problem while deploying Elasticsearch on AWS Fargate.
The following steps were done:
customized my Docker image and pushed it to AWS ECR
created a task definition for my Elasticsearch service
The Elasticsearch service fails on bootstrap with the following exception:
[3]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
It's a known Elasticsearch issue from ES 5.0 onwards. The solution provided by Elastic is as follows:
sysctl -w vm.max_map_count=262144
https://www.elastic.co/guide/en/elasticsearch/reference/current/docker.html#docker-cli-run-prod-mode
Is it possible to apply this command on AWS Fargate, given that we do not have access to the host?
Update: Elasticsearch has added an option to avoid the mmap check on boot, but it has not been released as of now:
https://github.com/elastic/elasticsearch/pull/32421
https://discuss.elastic.co/t/elk-on-aws-fargate/153967/4
It looks like you cannot do that.
Let me explain:
Docker actually wraps a process and runs it using the kernel installed on the host machine.
Changing "vm.max_map_count" is actually configuring the Linux kernel of the host machine.
When the host machine is under your control, such as when you use EC2, you can configure the kernel of the host machine by applying "user data" on your Launch Configuration. (See: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/bootstrap_container_instance.html)
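For example, a minimal user data script for an EC2-backed cluster could look like this:

#!/bin/bash
# raise the kernel limit Elasticsearch needs, both now and across reboots
echo "vm.max_map_count=262144" >> /etc/sysctl.conf
sysctl -w vm.max_map_count=262144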
But where the host machine is not under your control, as is the case with Fargate, you cannot change the host or the kernel settings it runs. The whole idea of Fargate is to run stateless Docker images, images that don't make any assumptions about the host they run on.
Elasticsearch, however, is an application that depends on specific host configuration (the "vm.max_map_count" setting), which means it does make assumptions about its host; therefore it cannot run on a generic host such as Fargate (unless you disable this check, which is not a good idea for a production environment).

Launching an ECS service from our own AMI

I am trying to deploy my sample Spring Cloud microservice to the AWS ECS service. I found the Fargate and EC2 launch types. What I am actually looking for is to launch the ECS service from my own EC2 instance. Right now I only have an Ubuntu 16.04 AMI, and I am planning to use the AWS ECS-optimized AMI for my EC2 instance. So I need to launch ECS using my own EC2 instance, and I am confused about how to do so.
I am seeking useful links or documentation for launching with the above method, since I am at a beginner stage with AWS Cloud.
The AMI you've configured for your instance doesn't matter (generally). Once your EC2 instance is created, go over to the ECS section of AWS and create a cluster containing your host.
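One caveat to "generally": if you use your own AMI rather than the ECS-optimized one, the instance also needs the ECS container agent installed and told which cluster to register with, typically via user data. A minimal sketch (the cluster name is a placeholder):

#!/bin/bash
# point the ECS agent at your cluster
echo "ECS_CLUSTER=my-cluster" >> /etc/ecs/ecs.config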
In ECS you need to define a task containing your container, the repo to pull it from, and all the other necessary details. From there you can go to your cluster and launch your task on your host, either manually or by defining a service to automate the launching for you.
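As a sketch of the service route with the AWS CLI (all names are placeholders):

# run and keep one copy of the registered task definition on the cluster
aws ecs create-service --cluster my-cluster --service-name my-service --task-definition my-spring-app:1 --desired-count 1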

How can we use a group Managed Service Account (gMSA) for Windows containers in Kubernetes

I was able to use a gMSA in Docker containers using the flag below with docker run:
--security-opt credentialspec=file://myuser.json
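For context, the full invocation looks roughly like this (the image and hostname here are illustrative; the JSON file is the credential spec generated with the CredentialSpec PowerShell module per the linked doc):

docker run --security-opt "credentialspec=file://myuser.json" --hostname myuser mcr.microsoft.com/windows/servercore:ltsc2019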
How can I achieve the same in a Kubernetes cluster?
I have multiple Windows nodes in my k8s cluster, and I have followed all the steps in https://learn.microsoft.com/en-us/virtualization/windowscontainers/manage-containers/manage-serviceaccounts
Or is there an alternative way to set a domain user context for Kubernetes containers/pods?
Any help would be much appreciated.
