My objective is to set up automatic node recovery upon failover. I came across the ec2-autoscale-reactor formula in Salt, which ties in with Auto Scaling Groups in AWS.
This requires the AWS SNS service to issue notifications over HTTP(S) to the Salt Master when a node is created or deleted. However, SNS needs a publicly reachable endpoint to deliver notifications, so this may not be a solution for me, as my Salt Master will remain private within the VPC.
Is there any other option in Salt that would let me achieve automatic node healing for components in AWS?
I'm working for a client that has a simple enough problem:
They have EC2s in two different Regions/VPCs that are hosting microservices. Up to this point all EC2s only needed to communicate with EC2 instances in the same subnet, but now we need to provision our infrastructure so that specific EC2s in VPC A's public subnet can call specific EC2s in VPC B's public subnet (and vice versa). Communications would be RESTful API calls over HTTPS/TLS 1.2.
This is nothing revolutionary but IT moves slowly and I want to create a Terraform proof of concept that:
Creates two VPCs
Creates a public subnet in each
Creates an EC2 in each
Installs httpd on each EC2 along with a cert to use SSL/TLS
Creates the proper security groups so that only IPs associated with the specific instance can call the relevant service
There is no containerization at this client, just individual EC2s for each app with 1 or 2 backups to distribute the load. I'm working with Terraform so I can submit different ideas to them for consideration, such as using VPC Peering, Elastic IPs, NAT Gateways, etc.
I can see how to use Terraform to make these infrastructure changes, but I'm not sure how to create EC2s that install a server that can use a temporary cert to demonstrate HTTPS traffic. I see a tool called Packer, but I was also thinking I could just create a custom AMI that does this.
What would the best solution be? This doesn't have to be production-ready, so I'm favoring a fast, stable proof of concept.
I would use the EC2 user_data option in Terraform to install httpd and create your SSL cert. Packer is great if you want to bake AMIs to spin up, but since this is a POC and you are not doing any complex configuration that would take long to run, I would just use user_data.
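To make that concrete, here is a rough, hypothetical sketch of what the bootstrap script might contain. The original answer only names Terraform's user_data argument; the boto3 wrapper, AMI ID, region, and cert paths below are my own illustrative assumptions (Amazon Linux 2, where mod_ssl's default config already points at the localhost.crt/localhost.key paths), and the same shell script could be passed to Terraform's user_data instead.

    # Hypothetical sketch, not the original answer's code: a user_data script
    # that installs httpd and a throwaway self-signed cert, wrapped in a boto3
    # run_instances call only so the example is self-contained.
    import boto3

    USER_DATA = """#!/bin/bash
    yum install -y httpd mod_ssl
    # A self-signed cert is enough for a proof of concept; mod_ssl's default
    # config on Amazon Linux 2 already references these paths.
    openssl req -x509 -nodes -newkey rsa:2048 -days 30 -subj "/CN=poc.example.internal" -keyout /etc/pki/tls/private/localhost.key -out /etc/pki/tls/certs/localhost.crt
    systemctl enable --now httpd
    """

    ec2 = boto3.client("ec2", region_name="us-east-1")   # placeholder region
    ec2.run_instances(
        ImageId="ami-0123456789abcdef0",                  # placeholder Amazon Linux 2 AMI
        InstanceType="t3.micro",
        MinCount=1,
        MaxCount=1,
        UserData=USER_DATA,  # the same script would go into Terraform's user_data argument
    )

If curl -k against the instance's public IP over HTTPS returns the Apache test page, the TLS part of the proof of concept is working.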
I launched an RDS instance, an S3 bucket, and an EC2 instance in AWS, and the flow is triggered properly using Lambda. Now I wish to move the RDS and EC2 parts from AWS to my local machine. My Lambda is triggered from S3.
How do I connect to the local database from Lambda in AWS?
It appears that your requirement is:
You wish to run an AWS Lambda function
Within the function, you wish to connect to a database running on your own computer (outside of AWS)
Firstly, I would not recommend this strategy. To maintain good performance, you should always have an application as close as possible to the database. This means on the same network, in the same location and not going across remote network connections or the Internet.
However, if you wish to do this, here are some things you would need to do (a sketch of the Lambda side follows the list):
Your database will need to be accessible on the Internet, so that you can connect to it remotely. To test this, try accessing it from an Amazon EC2 instance.
The AWS Lambda function should either be configured without VPC connectivity (which means that it is connected to the Internet) or, if you have configured it for VPC connectivity, it needs to be in a Private Subnet with a NAT Gateway enabling Internet access.
(Optional) For added security, you could lock down your database to only accept connections from a known IP address. To achieve this, you would need to use the VPC + NAT Gateway so that all traffic comes from the Elastic IP address assigned to the NAT Gateway.
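As a minimal sketch of the Lambda side, assuming a MySQL database exposed at a public address and connection details passed in as environment variables (the variable names and the pymysql dependency are illustrative assumptions, not something the answer above specifies; pymysql would need to be bundled with the deployment package or a layer):

    # Hypothetical handler: connect from Lambda to a database reachable over
    # the Internet, with all connection settings supplied via environment vars.
    import os
    import pymysql

    def lambda_handler(event, context):
        conn = pymysql.connect(
            host=os.environ["DB_HOST"],                    # public address of the local DB
            port=int(os.environ.get("DB_PORT", "3306")),
            user=os.environ["DB_USER"],
            password=os.environ["DB_PASSWORD"],
            database=os.environ["DB_NAME"],
            connect_timeout=5,
        )
        try:
            with conn.cursor() as cur:
                cur.execute("SELECT 1")                    # simple reachability check
                return {"connected": cur.fetchone()[0] == 1}
        finally:
            conn.close()

A short connect_timeout matters here: if the database is unreachable, you want the function to fail fast rather than run until the Lambda timeout.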
I agree with John Rotenstein that connecting your local machine to a Lambda running on AWS is probably a bad idea.
If your intention is to develop or test locally, I recommend the Serverless Framework and the serverless-offline plugin. They will allow you to simulate Lambda locally, and you can pass database config values through as environment variables.
See: Running AWS Lambda and API Gateway locally: serverless-offline
I've had to update our DataStax OpsCenter service from 5.1.3 to 5.2.1 because when I tried to add a new node to our Cassandra Cluster (DSE 4.6.6 on Amazon EC2) it said that "No nodes were available to retrieve a configuration from."
Updating to 5.2.1 fixed that issue, but it created a new one: the VPC ID field no longer lets me create nodes on the default EC2 network. That network isn't shown at all; I can only choose one of the two VPCs we already have.
It seems that they've added VPC support in 5.2.0...
Support for AWS VPC when creating clusters/nodes in the EC2 Cloud.
(OPSC-3429)
http://docs.datastax.com/en/opscenter/5.2/opsc/release_notes/opscReleaseNotes520.html
But it shouldn't be mandatory, since the "Add node" documentation also says the following...
A list of current VPC IDs in the region in which the current OpsCenter instance is built. Both the VPC ID and the network range are displayed. The AWS default VPC ID is also indicated. The VPC ID list is prepopulated after entering your EC2 credentials. An AWS VPC (Amazon Web Services Virtual Private Cloud) holds one contiguous network range that can be subdivided into subnets.
http://docs.datastax.com/en/opscenter/5.2//opsc/online_help/opscAddNodeCloudCluster.html
I need to create nodes on the default EC2 network, since the other nodes are already there and migrating to VPC is not an option for us right now (perhaps in a not-so-near future we'll have time to do so).
Is this a bug on OpsCenter 5.2.1?
Something that I need to update on our settings?
Is there a workaround to create the server the old way? (i.e., downgrade and somehow fix the "no nodes were available to retrieve a configuration" error)
OpsCenter developer here; I wrote the VPC feature. Can you screenshot your add-node form with the VPC dropdown expanded (feel free to black out any AWS IDs or other information you consider sensitive)?
One clarification, which may be obvious or may be the root of your problem: the "default EC2 network", aka "EC2 classic", is in a VPC, which OpsCenter (and much AWS documentation) calls the default VPC. This should be represented in the VPC dropdown as something like "vpc-123456 10.0.0.0/24 (default)". If you select this VPC, you're not "migrating to VPC" or launching nodes in a different place; you're launching in what is variously called "EC2 classic" or the "default VPC", which is where your existing nodes currently live. You can verify this by comparing the network listed, which should match the network your nodes are already launched in.
If you're not seeing any VPC option labelled as "default" then you may have stumbled on a bug.
I am planning to migrate from EC2-Classic to EC2-VPC. My application reads messages from SQS, downloads assets from S3, performs the actions described in the SQS messages, and then updates RDS. I have the following queries:
Is it beneficial for me to migrate to Amazon VPC from Classic?
I create my EC2 machines using Ruby scripts and deploy code to them using Capistrano. In Classic mode I used the IP address to deploy code with Capistrano. But in a VPC there is the concept of a private IP address, and you cannot reach a machine inside a subnet from outside. So my question is:
How should I deploy code to the EC2 instances, or rather, how should I connect to them?
Thank You.
This question is pretty broad, but I'll take a stab at it:
Is it beneficial for me to migrate to Amazon VPC from Classic?
It's beneficial if you care about the security of your data in transit and at rest. In a VPC none of your traffic is exposed to the outside, and you can choose which components you want to expose in case you want to receive traffic/data from the outside, e.g. your ELB or ELBs.
I create my EC2 machines using Ruby scripts and deploy code to them using Capistrano. In Classic mode I used the IP address to deploy code with Capistrano. But in a VPC there is the concept of a private IP address, and you cannot reach a machine inside a subnet from outside. So my question is: How should I deploy code to the EC2 instances, or rather, how should I connect to them?
You can actually assign a public IP to your EC2 machines in a VPC if you choose to. You can use that IP to deploy your code from the outside.
You can read about it here: http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpc-ip-addressing.html
If you want more security, you can always deploy from a machine inside your VPC that is reachable over SSH from the outside (a bastion). You can SSH to that machine and then run cap deploy from there.
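Purely as an illustration of the public-IP option (the question provisions instances with Ruby scripts, so treat this boto3 sketch and every ID in it as placeholders): the key is to request a public IP on the instance's network interface at launch, then hand the resulting address to Capistrano.

    # Hypothetical sketch: launch into a VPC subnet with a public IP so a
    # deploy tool can reach the instance from outside. All IDs are placeholders.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")        # placeholder region
    resp = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",                       # placeholder AMI
        InstanceType="t3.micro",
        MinCount=1,
        MaxCount=1,
        NetworkInterfaces=[{
            "DeviceIndex": 0,
            "SubnetId": "subnet-0abc1234",                     # a public subnet in your VPC
            "AssociatePublicIpAddress": True,                  # the key setting
            "Groups": ["sg-0abc1234"],                         # must allow SSH from your deploy host
        }],
    )
    instance_id = resp["Instances"][0]["InstanceId"]

    # Wait until the instance is running, then read the public IP for Capistrano.
    ec2.get_waiter("instance_running").wait(InstanceIds=[instance_id])
    desc = ec2.describe_instances(InstanceIds=[instance_id])
    print(desc["Reservations"][0]["Instances"][0]["PublicIpAddress"])

Note that the subnet's route table still needs a route to an Internet Gateway for that public IP to be reachable.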
Is there any way to make new instances added to an Auto Scaling group associate with an Elastic IP? I have a use case where the instances in my Auto Scaling group need to be whitelisted on remote servers, so they need to have predictable IPs.
I realize there are ways to do this programmatically using the API, but I'm wondering if there's any other way. It seems like CloudFormation may be able to do this.
You can associate an Elastic IP with ASG instances using manual or scripted API calls just as you would with any other instance -- however, there is no built-in automated way to do this. ASG instances are designed to be ephemeral/disposable, and Elastic IP association goes against this philosophy.
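For reference, such a scripted call could be as small as the following boto3 sketch, run from the instance itself at boot (for example from user data). The region and allocation ID are placeholders, and the metadata lookup assumes plain IMDSv1 access:

    # Hypothetical boot-time script: attach a pre-allocated Elastic IP to the
    # instance this code runs on. The allocation ID below is a placeholder.
    import urllib.request
    import boto3

    # The instance metadata service tells us our own instance ID.
    instance_id = urllib.request.urlopen(
        "http://169.254.169.254/latest/meta-data/instance-id", timeout=2
    ).read().decode()

    ec2 = boto3.client("ec2", region_name="us-east-1")         # placeholder region
    ec2.associate_address(
        InstanceId=instance_id,
        AllocationId="eipalloc-0123456789abcdef0",              # placeholder EIP allocation
        AllowReassociation=True,                                 # reuse the EIP after an instance is replaced
    )

The instance would need an IAM role allowing ec2:AssociateAddress for this to run without embedded credentials.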
To solve your problem re: whitelisting, you have a few options:
If the system that requires predictable source IPs is on EC2 and under your control, you can disable IP restrictions and use EC2 security groups to secure traffic instead
If the system is not under your control, you can set up a proxy server with an Elastic IP and have your ASG instances use the proxy for outbound traffic
You can use http://aws.amazon.com/vpc/ to gain complete control over instance addressing, including network egress IPs -- though this can be time consuming
There are 3 approaches I could find to doing this. CloudFormation will just automate it, but you need to understand what's going on first.
1.- As @gabrtv mentioned, use a VPC; this lends itself to two options.
1.1- Within a VPC, use a NAT Gateway to route all traffic in and out through the gateway. The gateway will have an Elastic IP for its outbound internet traffic, so you then whitelist the NAT Gateway's address on your server side. Look for NAT Gateway in the AWS documentation.
1.2-Create a Virtual Private Gateway/VPN connection to your backend servers in your datacenter and route traffic through that.
1.2.a-Create your instances within a DEDICATED private subnet.
1.2.b- Whitelist the entire subnet on your side; any request from that subnet will be allowed in.
1.2.c- Make sure the routes in the subnet are correct.
(I'm skipping a separate 2 on purpose, since that approach is already covered as 1.2)
3.-The LAZY way:
Utilize AWS OpsWorks to do two things:
1st: Allocate a resource pool of Elastic IPs.
2nd: Start load-based instances on demand and automatically assign each one an Elastic IP from the pool.
For the second part you will need the 24/7 instances to be your minimum and the load-based instances to be your maximum. AWS OpsWorks now allows CloudWatch alarms to trigger instance startup, so it is very similar to an ASG.
The only disadvantages of OpsWorks are that instances are stopped rather than terminated when load goes down, and that you must "create" the instances beforehand. You also depend on Chef Solo to initialize your instances, but it is the only way I could find to auto-assign EIPs to newly created instances.
Cheers!