Using EC2 to run CPU-heavy tasks

So I'm trying to figure out what's involved with doing the following using EC2:
I've got a desktop application that sometimes has to do CPU-intensive operations. What I need to do is offload these tasks to a cloud server, which will run a version of the app built specifically to handle that task and return the results.
There will be situations where multiple instances of the desktop app are being run by different users and several might request offloading of tasks concurrently.
My question: Can the desktop app establish its own new EC2 instance to do the work and, if so, is there a single IP address that it connects to in order to start the instance creation? When the instance is created, does it get its own IP address?
As you can see from my question, I'm misunderstanding some key part of the EC2 system. Some clarification would be much appreciated.

Amazon has an EC2 API that can be used to create, modify, or delete instances. This API is available in many of the popular programming languages, so your desktop app should be able to start an EC2 instance and offload the work automatically.
http://www.programmableweb.com/api/amazon-ec2/links
Each new EC2 instance has its own unique public IP address which can be retrieved via the APIs mentioned above.
Amazon EC2 has a free usage tier that allows you to run one micro instance at a time, free for a year. So go ahead and try it out; even if you run more than one instance at a time, it's super cheap. At the very least, use the free micro instance to become familiar with how EC2 works.
In your code:
1. Detect the need to offload computation.
2. Use the EC2 API to create another instance from a saved virtual machine image you previously set up (see the sketch below).
3. Use the API to get the IP address of the new instance.
4. Connect to the IP address of the instance you just started and tell it what work to do.
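For example, here is a minimal sketch of steps 2 and 3 using boto3, the AWS SDK for Python (the other language SDKs expose the same calls). The AMI ID, key pair, and region are placeholders for resources you would have set up beforehand:

```python
# Minimal sketch with boto3 (pip install boto3). The AMI ID, key
# pair, and region below are placeholders for resources you would
# have created beforehand.
import boto3

ec2 = boto3.resource("ec2", region_name="us-east-1")

# Step 2: launch one instance from your saved machine image.
instance = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI ID
    InstanceType="t3.micro",
    KeyName="my-key-pair",            # placeholder key pair
    MinCount=1,
    MaxCount=1,
)[0]

# Step 3: wait until it is running, then read its public IP.
instance.wait_until_running()
instance.reload()
print("Worker is reachable at", instance.public_ip_address)
```

When the work is done, remember to terminate the instance (instance.terminate()) so you stop paying for it.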

Related

Having Redis and Meilisearch on an EC2 instance

I am using an EC2 instance for my Meilisearch and I am wondering if I could install Redis on the same EC2 instance.
How do you manage to deploy a Redis instance and a search instance: do you deploy them as multiple instances or have only one instance?
I am using an EC2 instance for my Meilisearch and I am wondering if I could install Redis on the same EC2 instance.
There's nothing stopping you from installing all the software you want on a single EC2 instance. You would only need to make sure your server has enough CPU and RAM resources available to run both services.
How do you manage to deploy a Redis instance and a search instance: do you deploy them as multiple instances or have only one instance?
This part of your question is too broad. Are you trying to minimize costs? Maximize throughput? Is this for testing, or a live production environment? Will you need fault-tolerance and automatic disaster recovery?
There is no single "best" or "correct" answer to how you run these services; it all depends on your specific needs.
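For what it's worth, co-locating them just means the two services listen on different ports on the same host. A quick connectivity check in Python, assuming default ports and a placeholder master key (requires the redis and meilisearch client packages):

```python
# Both services on one EC2 instance, default ports:
# Redis on 6379, Meilisearch on 7700.
import redis
import meilisearch

r = redis.Redis(host="localhost", port=6379)
search = meilisearch.Client("http://localhost:7700", "masterKey")  # placeholder key

print(r.ping())         # True if Redis is up
print(search.health())  # {'status': 'available'} if Meilisearch is up
```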

best EC2 instance type for a management console of other instances

I have an application which requires a strong GPU, and it runs on an EC2 instance of type p2.xlarge, which is ideal for that kind of task. Because the p2.xlarge instances are quite expensive though, I keep them offline and only start them when necessary.
Sometimes I do multiple calculations on 1 instance, and sometimes I even use multiple instances at the same time.
I've written an application in Angular that can visualize the results of these calculations, though I've only tested it in an environment where the Angular application is hosted on that same instance.
But since I have multiple instances, it would be ideal to visualize them all on a single webpage. So that leads me to the following setup, where a single instance is like a portal or management console that controls the other instances.
Now, to get things moving, I would like to set up this front-end server as soon as possible. But there are so many instance types to choose from. What would be the best instance type for this front-end server, a dashboard / portal that controls other AWS instances? The only requirements are:
of course it should be able to run a nodejs server (and a minimalistic db for storing logins).
it should be able to start/stop other AWS instances.
it should be able to communicate with other AWS instances using websockets, and as far as I'm concerned, that shouldn't even really be over the internet; it can stay within the AWS network.
Well,
of course it should be able to run a nodejs server (and a minimalistic db for storing logins).
Sounds like you need a small machine. I would suggest the T2/T3 family: very cheap, and it can be configured without bursting limits, which gives you all the power you need for a very low price.
it should be able to start/stop other AWS instances.
Not a problem. Create an IAM role which has permissions for EC2 and, when you launch your instance, give it that IAM role. The instance will then be able to do whatever you grant the role permission to do with the API. Pay attention to the image you use: if you take Amazon Linux 2, you get the aws-cli preinstalled, which is pretty nice. Read more about IAM roles here.
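To illustrate, a sketch of the start/stop calls in Python's boto3 (the AWS SDK for JavaScript exposes the same StartInstances/StopInstances actions for your nodejs server). The region and instance IDs are placeholders; with the IAM role attached, no access keys appear in the code:

```python
# Start and stop GPU workers from the portal instance. boto3 picks
# up credentials automatically from the instance's IAM role.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")  # placeholder region

workers = ["i-0123456789abcdef0"]  # placeholder worker instance IDs

ec2.start_instances(InstanceIds=workers)
# ... run the GPU calculations ...
ec2.stop_instances(InstanceIds=workers)
```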
it should be able to communicate with other AWS instances using websockets, and as far as I'm concerned, that shouldn't even really be over the internet; it can stay within the AWS network.
Just make sure you launch all instances in the same VPC. When machines are in the same VPC, they can communicate with each other using only internal IPs. You can create a new VPC (the VPC documentation shows how), or just use the default one. After you launch an instance, you will get its internal IP.
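As a sketch (with a placeholder region and instance ID), the internal address can be looked up with the same API, so the portal can dial its websocket connections over the VPC:

```python
# Look up a worker's internal (VPC) IP for the websocket connection.
import boto3

ec2 = boto3.resource("ec2", region_name="eu-west-1")  # placeholder region
worker = ec2.Instance("i-0123456789abcdef0")          # placeholder ID
print("Dial the worker at", worker.private_ip_address)
```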

Akka, AMI - discover remote actors for database access

I am working on a prototype for a client where, on AWS, auto-scaling is used to create new VMs from Amazon Machine Images (AMIs), using Akka.
I want to have just one actor control access to the database, so it will create new children as needed and queue up requests that go beyond a set limit.
But, I don't know the IP address of the VM, as it may change as Amazon adds/removes VMs based on activity.
How can I discover the actor that will be used to limit access to the database?
I am not certain whether clustering will work (http://doc.akka.io/docs/akka/2.4/scala/cluster-usage.html); this question and its answers are from 2011 (Akka remote actor server discovery); and possibly routing may solve the problem: http://doc.akka.io/docs/akka/2.4.16/scala/routing.html
I have a separate REST service that just goes to the database, so it may be that this service will need to do the control before it goes to the actors.

Cheapest, future-scalable way to host an HTTPS PHP Website on AWS?

I've already got an RDS instance configured and running, but currently we're still running on our old web host. We'd like to decrease latency by hosting the code within AWS.
Two concerns:
1) Future scalability
2) Redundancy ... not a huge concern but AWS does occasionally go down.
Has anyone had this problem where they just need to cheaply run what is essentially a database interface via a language such as PHP/Ruby, in 2 regions? (with one as a failover)
Does Amazon offer something that automatically manages resources, that's also cost effective?
Amazon's Elastic Beanstalk service supports both PHP and Ruby apps natively, and allows you to scale your app servers automatically.
In a second region, run an RDS read replica of your primary instance (easy to set up in RDS) and have another Beanstalk environment set up there, ready as a failover.
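As a sketch of the replica setup in Python's boto3 (identifiers, regions, and the account number are placeholders; a cross-region source must be given as a full ARN):

```python
# Create a cross-region RDS read replica to use as a failover.
import boto3

rds = boto3.client("rds", region_name="us-west-2")  # failover region

rds.create_db_instance_read_replica(
    DBInstanceIdentifier="myapp-replica",
    # Cross-region sources are referenced by their full ARN.
    SourceDBInstanceIdentifier="arn:aws:rds:us-east-1:123456789012:db:myapp-primary",
    SourceRegion="us-east-1",  # lets boto3 handle the cross-region presigning
    DBInstanceClass="db.t3.micro",
)
```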

steps to securing amazon EC2+EBS

I have just installed a Fedora Linux AMI on Amazon EC2, from the Amazon collection. I plan to connect it to EBS storage. Assume I have done nothing more than the most basic steps: no passwords changed, nothing extra done at this stage beyond the above.
Now, from this point, what steps should I take to stop the hackers and secure my instance/EBS?
Actually there is nothing different here from securing any other Linux server.
At some point you need to create your own image (AMI). The reason for doing this is that the changes you make in an existing AMI will be lost if your instance goes down (which could easily happen, as Amazon doesn't guarantee that an instance will stay active indefinitely). Even if you do use EBS for data storage, you will need to repeat the same mundane OS-configuration tasks every time the instance goes down. You may also want to stop and restart your instance periodically, or start more than one of them during peak traffic.
You can read instructions for creating your image in the documentation. Regarding security, you need to be careful not to expose your certificate files and keys. If you fail to do this, a cracker could use them to start new instances that you will be charged for. Thankfully the process is very safe, and you only need to pay attention to a couple of points:
Start from an image you trust. Users are allowed to create public images to be used by everyone, and they could, either by mistake or on purpose, have left a security hole in them that could allow someone to steal your identifiers. Starting from an official Amazon AMI, even if it lacks some of the features you require, is always a wise solution.
In the process of creating an image, you will need to upload your certificates to a running instance. Upload them to a location that isn't bundled into the image (/mnt or /tmp). Leaving them in the image is insecure, since you may need to share the image in the future. Even if you are never planning to do so, a cracker could exploit a security fault in the software you're using (OS, web server, framework) to gain access to your running instance and steal your credentials.
If you are planning to create a public image, make sure that you leave no trace of your keys/identifiers in it (in the command history of the shell, for example).
What we did at work is make sure that servers could be accessed only with a private key, no passwords. We also disabled ping so that anyone out there pinging for servers would be less likely to find ours. Additionally, we blocked port 22 from anything outside our network IP, with the exception of a few IT personnel who might need access from home on the weekends. All other non-essential ports were blocked.
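On EC2, that kind of lockdown is expressed as security group rules. A sketch in Python's boto3 (the group ID and office CIDR are placeholders):

```python
# Allow SSH (port 22) only from one trusted network range.
import boto3

ec2 = boto3.client("ec2")

ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # placeholder security group ID
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 22,
        "ToPort": 22,
        "IpRanges": [{"CidrIp": "203.0.113.0/24",  # placeholder office CIDR
                      "Description": "SSH from office network only"}],
    }],
)
```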
If you have more than one EC2 instance, I would recommend finding a way to ensure that intercommunication between servers is secure. For instance, you don't want server B to get hacked too just because server A was compromised. There may be a way to block SSH access from one server to another, but I have not personally done this.
What makes securing an EC2 instance more challenging than an in-house server is the lack of your corporate firewall. Instead, you rely solely on the tools Amazon provides you. When our servers were in-house, some weren't even exposed to the Internet and were only accessible within the network because the server just didn't have a public IP address.
