I am using an EC2 instance for my Meilisearch and I am wondering if I could install Redis on the same EC2 instance.
How do you manage to deploy a Redis instance and a search instance? Do you run them as multiple instances or on only one instance?
I am using an EC2 instance for my Meilisearch and I am wondering if I could install Redis on the same EC2 instance.
There's nothing stopping you from installing all the software you want on a single EC2 instance. You would only need to make sure your server has enough CPU and RAM resources available to run both services.
How do you manage to deploy a Redis instance and a search instance? Do you run them as multiple instances or on only one instance?
This part of your question is too broad. Are you trying to minimize costs? Maximize throughput? Is this for testing, or a live production environment? Will you need fault-tolerance and automatic disaster recovery?
There is no single "best" or "correct" answer to how you run these services; it all depends on your specific needs.
Related
I use three services in my docker-compose file: a Django API, Celery, and Redis. I can run this using AWS ECS, for which I have to create three services (one for each) and at least three tasks per service, which seems expensive compared to a single AWS EC2 instance, where I can deploy all three services by running the docker-compose command. On EC2 I can also attach an Auto Scaling group for horizontal scaling by creating an image of my instance, and attach a load balancer to it.
I don't understand why one should use ECS, which is mainly meant for auto-scaling multi-service dockerized applications, when I can do the same with simple EC2 Auto Scaling groups. Am I missing something?
The "why should one use ECS" question is, at its heart, a question of "why use an abstraction". To answer it: ECS bills itself as a "fully managed container orchestration service" and provides an abstraction on top of a cluster of EC2 instances (run and managed by you or by AWS, depending on the choices you make). It comes down to these choices, which you should weigh before deciding whether or not to use ECS:
Do you want to map containers to EC2 instances yourself, OR do you want your container orchestration engine to take care of it for you?
Do you even want to be aware of the underlying EC2 instances, and hence be responsible for running and patching them, OR do you want your container orchestration engine to take care of that for you?
Do you want to own the responsibility of redeploying newly built and tested containers on each of your EC2 instances individually, OR do you want your container orchestration engine and its associated compute platform (e.g. Fargate) to take care of it for you?
I have an application which requires a strong GPU, and it runs on an EC2 instance of type p2.xlarge, which is ideal for those kinds of tasks. Because p2.xlarge instances are quite expensive, though, I keep them offline and only start them when necessary.
Sometimes I do multiple calculations on 1 instance, and sometimes I even use multiple instances at the same time.
I've written an application in Angular that can visualize the results of these calculations, though I've only tested it in an environment where the Angular application is hosted on that same instance.
But since I have multiple instances, it would be ideal to visualize them all on a single webpage. That leads me to the diagram below, where a single instance acts like a portal or management console that controls the other instances.
Now, to get things moving, I would like to set up this front-end server as soon as possible. But there are so many instance types to choose from. What would be the best instance type for this front-end server for a dashboard / portal that controls other AWS instances? The only requirements are:
of course it should be able to run a nodejs server (and a minimalistic db for storing logins).
it should be able to start/stop other aws instances.
it should be able to communicate with other AWS instances using websockets, and as far as I'm concerned, that doesn't even need to go over the internet; it can stay within the AWS network.
Well,
of course it should be able to run a nodejs server (and a minimalistic db for storing logins).
Sounds like you need a small machine.
I would suggest the T2/T3 family: they're very cheap, and they can be configured with unlimited mode so that bursting isn't capped by CPU credits, which gives you all the power you need for a very low price.
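If you go the T3 route, the unlimited-mode setting can be passed at launch time. A minimal sketch with boto3 (the AMI ID and key name below are placeholders, not real values):

```python
# Sketch: launching a small burstable instance with unlimited mode enabled.
# The AMI ID and key name are placeholders.
def build_launch_params(ami_id, key_name):
    """Build the kwargs for boto3's EC2 run_instances call."""
    return {
        "ImageId": ami_id,
        "InstanceType": "t3.small",  # small, cheap burstable type
        "MinCount": 1,
        "MaxCount": 1,
        "KeyName": key_name,
        # "unlimited" lets the instance keep bursting past its CPU credit
        # balance (extra vCPU time is billed) instead of being throttled.
        "CreditSpecification": {"CpuCredits": "unlimited"},
    }

params = build_launch_params("ami-0123456789abcdef0", "my-key")
# The actual call, once credentials are configured:
# import boto3
# ec2 = boto3.client("ec2")
# response = ec2.run_instances(**params)
```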
it should be able to start/stop other aws instances.
Not a problem. Create an IAM role which has permissions on EC2, and when you launch your instance, give it that IAM role. It will then be able to do whatever you grant it via the API.
Pay attention to the image you use: if you take Amazon Linux 2, you get the AWS CLI preinstalled, which is pretty nice.
Read more about IAM roles in the AWS documentation.
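As a reference point, a minimal policy sketch for such a role could look like the following (built as a Python dict for readability; in production you would scope `Resource` down to specific instance ARNs rather than `*`):

```python
import json

# Sketch of an IAM policy granting only the start/stop/describe rights the
# portal needs. The wildcard Resource is for illustration only; restrict it
# to the ARNs of the worker instances in a real deployment.
portal_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ec2:DescribeInstances",
                "ec2:StartInstances",
                "ec2:StopInstances",
            ],
            "Resource": "*",
        }
    ],
}

# This JSON document is what you paste into the IAM role's policy.
policy_json = json.dumps(portal_policy, indent=2)
```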
it should be able to communicate with other AWS instances using websockets, and as far as I'm concerned, that doesn't even need to go over the internet; it can stay within the AWS network.
Just make sure you launch all instances in the same VPC. When machines are in the same VPC, they can communicate with each other using only internal IPs. You can create a new VPC, or just use the default one. After you launch an instance, you will get its internal IP.
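Once the instances share a VPC, the portal can look those internal IPs up through the EC2 API. A small sketch of parsing the response shape boto3 returns for describe_instances (the instance ID and IP below are placeholder sample data; the real call is shown as a comment):

```python
def private_ips(describe_response):
    """Pull the private IPs out of a describe_instances-shaped response."""
    ips = []
    for reservation in describe_response.get("Reservations", []):
        for instance in reservation.get("Instances", []):
            ip = instance.get("PrivateIpAddress")
            if ip:
                ips.append(ip)
    return ips

# Trimmed sample of the structure boto3's describe_instances returns:
sample = {
    "Reservations": [
        {"Instances": [{"InstanceId": "i-0abc", "PrivateIpAddress": "10.0.1.12"}]}
    ]
}
# Real call: boto3.client("ec2").describe_instances(InstanceIds=[...])
```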
It seems to me that there are two alternative ways of having an autoscaled web service running on EC2 instances behind an ELB:
1) create an Auto Scaling Launch Configuration that specifies the image ID of my custom AMI (and the instance type to use). Then, when the Auto Scaling trigger fires, it will simply spin up new EC2 instances from that AMI.
2) Use ELB and ECS instead, as ECS seems to have its own Auto Scaling feature.
In what circumstances is it better to use ECS?
The two options are not mutually exclusive. To answer your last question first: you use ECS when you run containerized applications. You can scale these applications using Service Auto Scaling in ECS, which helps you bring up additional containers when you're running short of resources to serve incoming requests.
This is different from scaling EC2 instances with a Launch Configuration. You'll need additional instances when you run out of resources to spin up new containers in your ECS cluster. By the way, you should always use a Launch Configuration to bring new EC2 instances into an Auto Scaling group bound to your ECS cluster, since it makes everything a lot easier. See for example this tutorial.
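A sketch of what such a Launch Configuration can look like, assuming the ECS-optimized AMI (where the ECS agent reads `/etc/ecs/ecs.config` at boot); the AMI ID, cluster name, and instance profile below are placeholders:

```python
# Sketch: user data that registers a newly launched EC2 instance with an
# ECS cluster. "my-cluster" and the AMI ID are placeholder values.
def ecs_user_data(cluster_name):
    """Boot script telling the ECS agent which cluster to join."""
    return (
        "#!/bin/bash\n"
        f"echo ECS_CLUSTER={cluster_name} >> /etc/ecs/ecs.config\n"
    )

launch_config = {
    "LaunchConfigurationName": "ecs-cluster-lc",
    "ImageId": "ami-0123456789abcdef0",   # placeholder: an ECS-optimized AMI
    "InstanceType": "t3.medium",
    "IamInstanceProfile": "ecsInstanceRole",  # lets the agent talk to ECS
    "UserData": ecs_user_data("my-cluster"),
}
# Real call:
# boto3.client("autoscaling").create_launch_configuration(**launch_config)
```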
As for the ELB, with ECS you are actually better off using an ALB. With containerized applications, you need to map each container's exposed port to a random port on the host machine, which makes registration with the load balancer a lot more complicated. However, the ALB is integrated with ECS so that, whenever you launch a new task, the ECS service running the task requests registration of the task as a target for the ALB (see this link for a more detailed explanation). It takes care of everything on your behalf.
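That dynamic port mapping corresponds to setting `hostPort` to 0 in the task's container definition; ECS then picks a free host port at launch and registers it with the ALB target group for you. A trimmed sketch (the container name and image are placeholders):

```python
# Fragment of an ECS task definition's containerDefinitions entry.
# hostPort 0 asks ECS for dynamic port mapping: each task gets a random
# host port, which the ECS service registers with the ALB target group.
container_definition = {
    "name": "web",                   # placeholder container name
    "image": "my-repo/web:latest",   # placeholder image
    "memory": 256,
    "portMappings": [
        {"containerPort": 8080, "hostPort": 0, "protocol": "tcp"}
    ],
}
```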
I've already got an RDS instance configured and running, but currently we're still running on our old web host. We'd like to decrease latency by hosting the code within AWS.
Two concerns:
1) Future scalability
2) Redundancy ... not a huge concern but AWS does occasionally go down.
Has anyone had this problem where they just need to cheaply run what is essentially a database interface via a language such as PHP/Ruby, in 2 regions? (with one as a failover)
Does Amazon offer something that automatically manages resources, that's also cost effective?
Amazon's Elastic Beanstalk service supports both PHP and Ruby apps natively, and allows you to scale your app servers automatically.
In a second region, run an RDS read replica off your primary instance (easy to set up in RDS) and have another Beanstalk environment set up there, ready as a failover.
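A sketch of creating such a cross-region replica with boto3's RDS client; every identifier, account number, and region below is a placeholder:

```python
# Sketch: parameters for a cross-region RDS read replica. When the source
# is in another region, it must be referenced by its full ARN.
replica_params = {
    "DBInstanceIdentifier": "mydb-replica-west",   # placeholder replica name
    # Placeholder ARN of the primary instance in its home region:
    "SourceDBInstanceIdentifier": "arn:aws:rds:us-east-1:123456789012:db:mydb",
    "DBInstanceClass": "db.t3.micro",
}
# Real call, issued against the *destination* region:
# import boto3
# rds = boto3.client("rds", region_name="us-west-2")
# rds.create_db_instance_read_replica(**replica_params)
```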
So I'm trying to figure out what's involved with doing the following using EC2:
I've got a desktop application that sometimes has to do CPU-intensive operations. What I need to do is offload these tasks to a cloud server that runs a version of the app built specifically to handle that task and return the results.
There will be situations where multiple instances of the desktop app are being run by different users and several might request offloading of tasks concurrently.
My question: can the desktop app spin up its own new EC2 instance to do the work and, if so, is there a single IP address it connects to in order to start the instance creation? When the instance is created, does it get its own IP address?
As you can see by my question, I'm misunderstanding some key part of the EC2 system. Some clarification would be much appreciated.
Amazon has an EC2 API that can be used to create, modify, or delete instances. This API is available in many popular programming languages, so your desktop app should be able to start an EC2 instance and offload the work automatically.
http://www.programmableweb.com/api/amazon-ec2/links
Each new EC2 instance has its own unique public IP address which can be retrieved via the APIs mentioned above.
Amazon EC2 has a free usage tier that allows you to run one micro instance at a time, free for a year. So go ahead and try it out; even if you run more than one instance at a time, it's super cheap. At least use the free micro instance to become familiar with how EC2 works.
In your code:
Detect the need to offload computation.
Use the EC2 API to create another instance from a saved machine image you previously set up.
Use the API to get the IP address of the new instance.
Connect to the IP address of the instance you just started and tell it what work to do.
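The steps above can be sketched as follows. The boto3 calls are shown as comments so the shape of the data each step works with is clear; all IDs, IPs, and the AMI are placeholder sample values, and the worker image would be one you have prepared yourself:

```python
def new_instance_id(run_response):
    """Instance ID from a run_instances-shaped response."""
    return run_response["Instances"][0]["InstanceId"]

def public_ip(describe_response):
    """Public IP from a describe_instances-shaped response (None until assigned)."""
    inst = describe_response["Reservations"][0]["Instances"][0]
    return inst.get("PublicIpAddress")

# 1. Start a worker from your saved image:
#    resp = ec2.run_instances(ImageId="ami-0123456789abcdef0",
#                             InstanceType="t3.medium",
#                             MinCount=1, MaxCount=1)
run_resp = {"Instances": [{"InstanceId": "i-0abc123"}]}  # trimmed sample
instance_id = new_instance_id(run_resp)

# 2. Wait until it is running, then look up its IP:
#    ec2.get_waiter("instance_running").wait(InstanceIds=[instance_id])
#    desc = ec2.describe_instances(InstanceIds=[instance_id])
desc_resp = {"Reservations": [{"Instances": [
    {"InstanceId": instance_id, "PublicIpAddress": "54.0.0.1"}  # sample
]}]}
ip = public_ip(desc_resp)

# 3. Connect to `ip` and submit the work (application-specific protocol).
```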