I'm hoping to move an application to AWS.
I would like to use Auto Scaling so that not all my EC2 instances are running when application usage is quiet.
My problem is this:
I have one service account used for all communication between the various components of the application and the servers in that environment.
We have a security exception within my company which allows us to use the service account to perform its actions on each individual server.
Every time we introduce a new server to the environment, we have to request that the security team update our exception list to allow the new server in as well.
There is no automatic method for doing this; we have to submit a request to the security team asking for the new server to be added to the exception.
So while Auto Scaling would be perfect, how can it work in this case, when each time a server is added the security team needs to be notified so they can add the new server to the exception list?
Thanks
You can get notifications when your Auto Scaling group scales either up or down. SNS can send a variety of things, including SMS (text) messages to a cell phone.
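For example, here is a minimal sketch of wiring those notifications up with boto3; the group name and topic ARN are placeholders, and it assumes the SNS topic (with your SMS or email subscriptions) already exists:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Ask the Auto Scaling group to publish launch/terminate events to SNS;
# the topic then fans out to SMS, email, or anything else subscribed to it.
autoscaling.put_notification_configuration(
    AutoScalingGroupName="my-asg",  # hypothetical group name
    TopicARN="arn:aws:sns:us-east-1:123456789012:scaling-events",  # placeholder
    NotificationTypes=[
        "autoscaling:EC2_INSTANCE_LAUNCH",
        "autoscaling:EC2_INSTANCE_TERMINATE",
    ],
)
```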
While this would work, it is incredibly manual. The goal of an Auto Scaling group is to let the environment expand and contract without human intervention. I personally would not implement this because, depending on the availability of your security team, they may become a bottleneck to scaling up. If for some reason they miss the scale-up event that signals them to do something, then you've got orphan machines that you're paying for that are doing nothing.
There are also ways to script the provisioning of a new machine, so perhaps there is a way to add what you want automatically. AWS calls this user data - you can learn a bit more about it from the AWS EC2 docs.
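As a rough sketch of the idea (assuming boto3 is configured; the AMI ID and the security team's endpoint are made-up placeholders), you could pass a user-data script at launch that announces the new server itself:

```python
import boto3

ec2 = boto3.client("ec2")

# This shell script runs once on first boot; here it posts the new host's
# name to a hypothetical internal endpoint that files the exception request.
user_data = """#!/bin/bash
host=$(curl -s http://169.254.169.254/latest/meta-data/local-hostname)
curl -X POST https://security.example.internal/exceptions -d "host=$host"
"""

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    UserData=user_data,  # boto3 base64-encodes this for you
)
```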
But ultimately I'd really take a step back and look at your architecture. If you can't script the machine provisioning then autoscaling is not very worthwhile - it's just plain "have devops add another machine if needed and hope they remember to take it down when it's not needed".
Related
I have an application which requires a strong GPU, and it runs on an EC2 instance of type p2.xlarge, which is ideal for that kind of task. Because p2.xlarge instances are quite expensive, though, I keep them offline and only start them when necessary.
Sometimes I do multiple calculations on 1 instance, and sometimes I even use multiple instances at the same time.
I've written an application in Angular that can visualize the results of these calculations, but I've only tested it in an environment where the Angular application is hosted on that same instance.
But since I have multiple instances, it would be ideal to visualize them all on a single webpage. That leads me to the diagram below, where a single instance acts like a portal or management console that controls the other instances.
Now, to get things moving, I would like to set up this front-end server as soon as possible. But there are so many instance types to choose from. What would be the best instance type for this front-end server, for a dashboard / portal that controls other AWS instances? The only requirements are:
of course it should be able to run a Node.js server (and a minimalistic DB for storing logins).
it should be able to start/stop other AWS instances.
it should be able to communicate with other AWS instances using websockets, and as far as I'm concerned, that shouldn't even really be over the internet; it can stay within the AWS network.
Well,
of course it should be able to run a Node.js server (and a minimalistic DB for storing logins).
Sounds like you need a small machine.
I would suggest the T2/T3 family: very cheap, and it can be configured without bursting limits (unlimited mode), which gives you all the power you need for a very low price.
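A minimal sketch of launching one in unlimited mode with boto3 (the AMI ID is a placeholder):

```python
import boto3

ec2 = boto3.client("ec2")

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI
    InstanceType="t3.small",
    MinCount=1,
    MaxCount=1,
    # "unlimited" lets the instance burst above its CPU baseline without
    # being throttled when credits run out (surplus credits cost extra).
    CreditSpecification={"CpuCredits": "unlimited"},
)
```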
it should be able to start/stop other AWS instances.
Not a problem. Create an IAM role which has permissions for EC2, and when you launch your instance, give it that IAM role. The instance will be able to do whatever you grant it through the API.
Pay attention to the image you use: if you take Amazon Linux 2 you get the AWS CLI preinstalled, which is pretty nice.
Read more about IAM roles here.
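A minimal sketch of what the portal can then do, assuming it runs under an IAM role that allows ec2:StartInstances and ec2:StopInstances (the region and instance ID are placeholders):

```python
import boto3

# With an instance profile attached, boto3 picks up the role's temporary
# credentials automatically from the instance metadata service.
ec2 = boto3.client("ec2", region_name="eu-west-1")  # assumed region

worker = "i-0123456789abcdef0"  # placeholder worker instance ID

ec2.start_instances(InstanceIds=[worker])
# ... run the GPU calculations, collect results ...
ec2.stop_instances(InstanceIds=[worker])
```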
it should be able to communicate with other AWS instances using websockets, and as far as I'm concerned, that shouldn't even really be over the internet; it can stay within the AWS network.
Just make sure you launch all instances in the same VPC. When machines are in the same VPC, they can communicate with each other using only internal IPs. You can create a new VPC like here, or just use the default one. After you launch an instance you will get its internal IP; the sketch below shows how to look it up via the API.
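A hedged sketch of that lookup for the websocket connection (the instance ID and port are placeholders):

```python
import boto3

ec2 = boto3.client("ec2")

resp = ec2.describe_instances(InstanceIds=["i-0123456789abcdef0"])  # placeholder
private_ip = resp["Reservations"][0]["Instances"][0]["PrivateIpAddress"]

# Connect over the VPC-internal address, never the public internet.
print(f"ws://{private_ip}:8080")  # assumed websocket port
```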
I am working on a prototype for a client where, on AWS, auto scaling is used to create new VMs from Amazon Machine Images (AMIs), using Akka.
I want to have just one actor control access to the database, so it will create new children as needed and queue up requests that go beyond a set limit.
But, I don't know the IP address of the VM, as it may change as Amazon adds/removes VMs based on activity.
How can I discover the actor that will be used to limit access to the database?
I am not certain whether clustering will work (http://doc.akka.io/docs/akka/2.4/scala/cluster-usage.html); this question and its answers are from 2011 (Akka remote actor server discovery), and possibly routing may solve this problem: http://doc.akka.io/docs/akka/2.4.16/scala/routing.html
I have a separate REST service that just goes to the database, so it may be that this service will need to do the control before it goes to the actors.
Beanstalk does the typical thing an autoscaling app would do: waits for demand (in the form of web requests) to spike, then it spawns new slaves for the cluster based on the registered AMI.
My question is: if I have an actor-based system, can I manually trigger the expansion of the grid, or do I have to route requests to my agents through some kind of HTTP pipeline that AWS would recognize?
Seems like other solutions on AWS (e.g. Hadoop) allow for explicit control. Also, other solutions on top of AWS (e.g. Heroku) do as well.
This is an old question but I think an answer might be useful for those that come after. This looks like a case for AWS Autoscaling. See http://aws.amazon.com/documentation/autoscaling/.
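To the explicit-control part of the question: yes, you can trigger expansion programmatically rather than waiting for a demand alarm. A minimal sketch with boto3, assuming an Auto Scaling group named "akka-workers" (a placeholder) already exists:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Raise the desired capacity directly; Auto Scaling launches instances
# from the registered AMI until the group reaches it.
autoscaling.set_desired_capacity(
    AutoScalingGroupName="akka-workers",  # placeholder group name
    DesiredCapacity=5,
    HonorCooldown=False,
)
```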
I have just installed a Fedora Linux AMI on Amazon EC2, from the Amazon collection. I plan to connect it to EBS storage. Assume I have done nothing more than the most basic steps: no passwords changed, nothing extra done at this stage other than the above.
Now, from this point, what steps should I take to stop the hackers and secure my instance/EBS?
Actually there is nothing different here from securing any other Linux server.
At some point you need to create your own image (AMI). The reason for doing this is that any changes you make to an instance launched from an existing AMI will be lost if the instance goes down (which could easily happen, as Amazon doesn't guarantee that an instance will stay active indefinitely). Even if you do use EBS for data storage, you will need to repeat the same mundane OS-configuration tasks every time the instance goes down. You may also want to stop and restart your instance at certain periods, or start more than one of them in case of peak traffic.
You can read some instructions for creating your image in the documentation. Regarding security, you need to be careful not to expose your certificate files and keys. If you fail to do this, a cracker could use them to start new instances that you will be charged for. Thankfully the process is very safe, and you only need to pay attention to a couple of points:
Start from an image you trust. Users are allowed to create public images to be used by everyone, and they could, either by mistake or on purpose, have left a security hole in them that could allow someone to steal your identifiers. Starting from an official Amazon AMI, even if it lacks some of the features you require, is always a wise solution.
In the process of creating an image, you will need to upload your certificates to a running instance. Upload them to a location that isn't bundled into the image (/mnt or /tmp). Leaving them in the image is insecure, since you may need to share your image in the future. Even if you are never planning to do so, a cracker could exploit a security fault in the software you're using (OS, web server, framework) to gain access to your running instance and steal your credentials.
If you are planning to create a public image, make sure that you leave no trace of your keys/identifiers in it (in the command history of the shell, for example).
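For the image-creation step itself, a hedged sketch via the API (an alternative to the console flow in the documentation; the instance ID and names are placeholders):

```python
import boto3

ec2 = boto3.client("ec2")

# Snapshot the configured instance into a private AMI you control.
resp = ec2.create_image(
    InstanceId="i-0123456789abcdef0",  # placeholder: the configured instance
    Name="my-hardened-fedora",         # hypothetical image name
    Description="Base image with OS configuration baked in",
)
print("new AMI:", resp["ImageId"])
```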
What we did at work is we made sure that servers could be accessed only with a private key, no passwords. We also disabled ping so that anyone out there pinging for servers would be less likely to find ours. Additionally, we blocked port 22 from anything outside our network IP, with the exception of a few IT personnel who might need access from home on the weekends. All other non-essential ports were blocked.
If you have more than one EC2 instance, I would recommend finding a way to ensure that intercommunication between servers is secure. For instance, you don't want server B to get hacked too just because server A was compromised. There may be a way to block SSH access from one server to another, but I have not personally done this.
What makes securing an EC2 instance more challenging than an in-house server is the lack of your corporate firewall. Instead, you rely solely on the tools Amazon provides you. When our servers were in-house, some weren't even exposed to the Internet and were only accessible within the network because the server just didn't have a public IP address.
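Those Amazon-provided tools are mainly security groups. A minimal sketch of the SSH rule described above, with boto3 (the group ID and office CIDR are placeholders):

```python
import boto3

ec2 = boto3.client("ec2")

# Allow SSH only from the office network, not from 0.0.0.0/0;
# everything not explicitly allowed stays blocked by default.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # placeholder security group
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 22,
            "ToPort": 22,
            "IpRanges": [{"CidrIp": "203.0.113.0/24"}],  # placeholder office CIDR
        }
    ],
)
```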
I have thought a lot recently about the different hosting types that are available out there. We can get pretty decent (average) latency from an EC2 instance in Europe (we're situated in Sweden) and the cost is pretty good. Obviously, the possibility of scaling instances up and down is amazing for us, as we're in a really expansive phase right now.
From a logical perspective, I also believe that Amazon can probably provide better availability and stability than most hosting companies on the market. That will probably also outweigh the lack of a phone number to dial whenever we wonder about anything, even if it forces us to google things ourselves :)
So, what should we be concerned about if we were to run our web server on EC2? What are the pros and cons?
To clarify, we will run a pretty standard LAMP configuration with memcached added probably.
Thanks
So, what should we be concerned about if we were to run our web server on EC2? What are the pros and cons?
The pros and cons of EC2 are somewhat dependent on your business. Below is a list of issues that I believe affect large organizations:
Separation of duties: Your existing company probably has separate networking and server operations teams. With EC2 it may be difficult to separate these concerns, i.e. the person defining your Security Groups (firewall) is probably the same person who can spin up servers.
Home access to your servers: Corporate environments are usually administered on-premise or through a Virtual Private Network (VPN) with two-factor authentication. Administrators with access to your EC2 control panel can likely make changes to your environment from home. Note further that your EC2 access keys/accounts may remain available to people who leave or get fired from your company, making home access an even bigger problem...
Difficulty in validating security: Some security controls may inadvertently become weak. Within your premises you can be 99% certain that all servers are behind a firewall that restricts any admin access from outside your premises. When you're in the cloud it's a lot more difficult to ensure such controls are in place for all your systems.
Appliances and specialized tools do not go in the cloud: Specialized tools cannot go into the cloud, which may impact your security posture. For example, you may have network intrusion detection appliances sitting in front of on-premise servers, and you will not be able to move these into the cloud.
Legislation and regulations: I am not sure about regulations in your country, but you should be aware of cross-border issues. For example, running European systems on American EC2 soil may open you up to Patriot Act regulations. If you're dealing with credit card numbers or personally identifiable information, then you may also have various issues to deal with if infrastructure is outside of your organization.
Organizational processes: Who has access to EC2 and what can they do? Can someone spin up an Extra Large machine and install their own software? (Side note: our company http://LabSlice.com actually adds policies to stop this from happening.) How do you back up and restore data? Will you start replicating processes within your company simply because you've got a separate cloud infrastructure?
Auditing challenges: Any auditing activities that you normally undertake may be complicated if data is in the cloud. A good example is PCI: can you actually always prove data is within your control if it's hosted outside of your environment, somewhere in the ether?
Public/private connectivity is a challenge: Do you ever need to mix data between your public and private environments? It can become a challenge to send data between these two environments, and to do so securely.
Monitoring and logging: You will likely have central systems monitoring your internal environment and collecting logs from your servers. Will you be able to achieve the same monitoring and log collection if you run servers off-premise?
Penetration testing: Some companies run periodic penetration testing activities directly on public infrastructure. I may be mistaken, but I think that running pen testing against Amazon infrastructure is against their contract (which makes sense, as they would only see public hacking activity against infrastructure they own).
I believe that EC2 is definitely a good idea for small/medium businesses. They are rarely encumbered by the above issues, and usually Amazon can offer better services than an SMB could achieve themselves. For large organizations EC2 can obviously raise some concerns and issues that are not easily dealt with.
Simon # http://blog.LabSlice.com
The main negative is that you are fully responsible for ALL server administration, such as security patches, firewall, backup, and server configuration and optimization.
Amazon will not provide you with any OS or higher level support.
If you would be FULLY comfortable running your own hardware then it can be a great cost savings.
I work at a company and we are hosting with Amazon EC2; we are running one high-CPU instance and two small instances.
I won't say Amazon EC2 is good or bad, but I'll just give you a list of our experiences over time.
Reliability: bad. They have a lot of outages. Mostly only segments, but yeah...
Cost: expensive. It's cloud computing, not server hosting! A friend works at a company that does complex calculations which have to be finished at a certain time sharp every day, and the calculation time depends on the amount of data they get... They run some servers themselves, and if capacity gets scarce, they kick in a bunch of EC2s.
That's the perfect use case, but if you run a server 24/7 anyway, you are better off with a dedicated root server.
A dedicated root server will also give you better performance; e.g. disk reads will be faster, as it has a local disk!
Traffic is expensive too.
Support: good, fast, and flexible; that's definitely very OK.
We had a big launch of a product with a lot of press coverage going on, and there were problems with the reverse DNS for email sending. The Amazon guys got it all set up, RIPE-conformant and nice, in no time.
The Amazon S3 hosting service is nice too, if you need it.
In Europe I would suggest going with a German hosting provider; they have very good connectivity as well.
For example:
http://www.hetzner.de/de/hosting/produkte_rootserver/eq4/
http://www.ovh.de/produkte/superplan_mini.xml
http://www.server4you.de/root-server/server-details.php?products=0
http://www.hosteurope.de/produkt/Dedicated-Server-Linux-L
http://www.klein-edv.de/rootserver.php
I have hosted with all of them and had good experiences. The best was definitely Hosteurope, but they are a bit more expensive.
I ran a CDN there with around 40 servers for two years and never experienced ANY outage on ANY of them.
Amazon had 3 outages in the last two months on our segments.
One minus that forced me to move away from Amazon EC2:
Spamhaus.org lists the whole Amazon EC2 block on its Policy Block List (PBL).
This means that all mail servers using spamhaus.org will report "blocked using zen.dnsbl" in your /var/log/mail.info when you send email.
The server I run uses email to register users and reset their passwords; this does not work anymore.
Read more about it at Spamhaus: http://www.spamhaus.org/pbl/query/PBL361340
Summary: Need to send email? Do not use Amazon EC2.
The other con no one has mentioned:
With a stock EC2 server, if an instance goes down, it "goes away." Any information on the local disk is gone, and gone forever. You have the added responsibility of ensuring that any information you want to survive a server restart is persisted off of the EC2 instance (into S3, RDS, EBS, or some other off-server service).
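A hedged sketch of that off-instance persistence, here using S3 with boto3 (the file, bucket, and key are placeholders):

```python
import boto3

s3 = boto3.client("s3")

# Anything left only on the instance's local disk dies with the instance;
# push it to durable storage on every change or on a schedule.
s3.upload_file(
    Filename="/var/app/state/results.db",  # hypothetical local file
    Bucket="my-app-durable-state",         # placeholder bucket
    Key="backups/results.db",
)
```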
I haven't tried Amazon EC2 in production, but I understand the appeal of it. My main issue with EC2 is that while it does provide a great and affordable way to move all the blinking lights in your server room to the cloud, they don't provide you with a higher level architecture to scale your application as demand increases. That is all left to you to figure out on your own.
This is not an issue for more experienced shops that can maintain all the needed infrastructure by themselves, but I think smaller shops are better served by something more along the lines of Microsoft's Azure or Google's AppEngine: Platforms that enforce constraints on your architecture in return for one-click scalability when you need it.
And I think the importance of quality support cannot be overstated. Look at the BitBucket blog. It seems that for a while there, every other post was about the downtime they had and the long hours it took for Amazon to get back to them with a resolution to their issues.
Compare that to Github, which uses the Rackspace cloud hosting service. I don't use Github, but I understand that they also have their share of downtime. Yet it doesn't seem that any of that downtime is attributed to Rackspace's slow customer support.
Two big pluses come to mind:
1) Cost - With Amazon EC2 you only pay for what you use and the prices are hard to beat. Being able to scale up quickly to meet demands and then later scale down and "return" the unneeded capacity is a huge win depending on your needs / use case.
2) Integration with other Amazon web services - this advantage is often overlooked. Having integration with Amazon SimpleDB or the Amazon Relational Database Service means that your data can live separately from the computing power that EC2 provides. This is a huge win that sets EC2 apart from others.
Amazon's cloud monitoring service and support are charged extra; the first is quite useful and you should consider it, and the second too if your app is mission critical.