Getting information about deployment from within an instance of AWS Elastic Beanstalk

My specific need is to get the list of EC2 instances in the deployment from within one of the instances.
I've tried using the AWS CLI, for example aws elb describe-load-balancers, but that just returns every load balancer in my account. I know you can narrow it down with --load-balancer-names, but from within the instance I don't automatically know which load balancer name to pass.
Perhaps a file can be created on instance creation by placing something in .ebextensions?

You can do it in a two-step process using the AWS CLI.
First you get the endpoint for your Elastic Beanstalk application:
aws elasticbeanstalk describe-environments --query='Environments[?ApplicationName==`Your-application-name`].EndpointURL'
Then you use the endpoint to get the instances:
aws elb describe-load-balancers --query='LoadBalancerDescriptions[?DNSName==`load-balancer-end-point-from-previous-step`].Instances[]'
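If you would rather do the same lookup from code, a rough boto3 equivalent of the two steps might look like this (the application name is a placeholder, and it assumes a classic load-balanced environment, as in the CLI commands above):

import boto3

eb = boto3.client('elasticbeanstalk')
elb = boto3.client('elb')

# Step 1: the environment's EndpointURL is the classic ELB's DNS name.
endpoint = eb.describe_environments(
    ApplicationName='Your-application-name'
)['Environments'][0]['EndpointURL']

# Step 2: find the load balancer with that DNS name and list its instances.
for lb in elb.describe_load_balancers()['LoadBalancerDescriptions']:
    if lb['DNSName'] == endpoint:
        print([i['InstanceId'] for i in lb['Instances']])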

Related

Launch EMR cluster via Lambda inside a VPC using boto3

I am trying to launch an EMR cluster from AWS Lambda code written with boto3 and Python. The Lambda is able to launch the cluster when there is no VPC configuration associated with it. However, as soon as I add the VPC configuration it fails to launch the cluster, erroring out without providing any error message.
I am launching the Lambda inside the default VPC, which has 3 public subnets and a default security group. I have checked that the VPC's route table is associated with an internet gateway and that the gateway is attached to the VPC.
The execution role provides full access to the CloudWatch, ElasticMapReduce, and EC2 actions.
Any help in resolving this schoolboy error will be much appreciated.
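For context, here is a minimal sketch of the kind of run_job_flow call being described, with the VPC configuration supplied through Ec2SubnetId; the subnet ID, instance types, and roles below are placeholders, not the asker's actual settings:

import boto3

emr = boto3.client('emr', region_name='us-east-1')

response = emr.run_job_flow(
    Name='sample-cluster',                          # placeholder name
    ReleaseLabel='emr-5.29.0',
    Applications=[{'Name': 'Spark'}],
    Instances={
        'MasterInstanceType': 'm5.xlarge',
        'SlaveInstanceType': 'm5.xlarge',
        'InstanceCount': 3,
        'Ec2SubnetId': 'subnet-0123456789abcdef0',  # the VPC subnet (placeholder ID)
        'KeepJobFlowAliveWhenNoSteps': True,
    },
    JobFlowRole='EMR_EC2_DefaultRole',              # default EMR instance profile
    ServiceRole='EMR_DefaultRole',                  # default EMR service role
)
print(response['JobFlowId'])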

Launching ECS service from our own AMI

I am trying to deploy my sample Spring Cloud microservice to the AWS ECS service. I found the Fargate and EC2 launch types. What I actually want is to launch the ECS service on my own EC2 instance. Right now I only have an Ubuntu 16.04 AMI, and I am planning to use the AWS ECS-optimized AMI for my EC2 instance. So I need to launch ECS using my own EC2 instance, and I am confused about how to do that with my own (ECS-optimized) EC2.
I am looking for useful links or documentation on launching ECS this way, since I am at the beginning stage with AWS Cloud.
The AMI you've configured for your instance doesn't matter (generally). Once your EC2 instance is created, go over to the ECS section of AWS and create a cluster containing your host.
In ECS you need to define a task containing your container, the repo to pull it from, and all the other necessary details. From here you can go to your cluster and launch your task on your host, either manually, or by defining a service to automate the launching for you.
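As a rough illustration of those two steps with boto3 (the cluster name, image URI, memory, and ports below are placeholders, not a definitive setup):

import boto3

ecs = boto3.client('ecs', region_name='us-east-1')

# Register a task definition describing the container to run.
ecs.register_task_definition(
    family='spring-cloud-demo',
    containerDefinitions=[{
        'name': 'spring-cloud-demo',
        'image': '123456789012.dkr.ecr.us-east-1.amazonaws.com/spring-cloud-demo:latest',
        'memory': 512,
        'portMappings': [{'containerPort': 8080, 'hostPort': 8080}],
    }],
)

# Launch the task as a service on the EC2-backed cluster containing your host.
ecs.create_service(
    cluster='my-cluster',                # placeholder cluster name
    serviceName='spring-cloud-demo',
    taskDefinition='spring-cloud-demo',
    desiredCount=1,
    launchType='EC2',
)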

Launching Resources for my aws Pipeline into a VPC, got Resource not healthy: TERMINATED

I have used AWS Data Pipeline to execute my bash shell tasks without any problems; in that case, Data Pipeline used its default EC2 instance to run my shell work.
Now I want AWS Data Pipeline to connect to my customized EC2 instance (in a VPC), instead of the default EC2 instance, to run my shell tasks. I followed the two links below, but got the error "Resource not healthy: TERMINATED" (failureReason):
http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-how-task-runner-user-managed.html
http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-resources-vpc.html
Based on the AWS documentation (step 4): Task Runner connects to the AWS Data Pipeline web service using HTTPS. If you are using an AWS resource, ensure that HTTPS is enabled in the appropriate routing table and subnet ACL. If you are using a firewall or proxy, ensure that port 443 is open.
Do I need to run a small HTTPS service (a dummy one is fine) on my customized EC2 instance? The AWS documentation is not clear about this.
Any suggestions or advice will be greatly appreciated!
I figured out the solution: I also opened port 80 in the security group on my customized EC2 (VPC) instance. Now Data Pipeline can connect to my customized EC2 instance and it works well :)
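For reference, the kind of security-group change described in the answer could also be made with boto3; the group ID and CIDR range below are placeholders, and whether you need the rule this wide open depends on your setup:

import boto3

ec2 = boto3.client('ec2', region_name='us-east-1')

# Open inbound TCP port 80 on the instance's security group, mirroring the fix above.
ec2.authorize_security_group_ingress(
    GroupId='sg-0123456789abcdef0',             # placeholder security group ID
    IpPermissions=[{
        'IpProtocol': 'tcp',
        'FromPort': 80,
        'ToPort': 80,
        'IpRanges': [{'CidrIp': '0.0.0.0/0'}],  # placeholder CIDR; restrict as needed
    }],
)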

How to access log files of neo4j deployed to aws ec2

I have deployed Neo4j to EC2 using https://github.com/neo4j-contrib/ec2neo
I am getting a 503 Service Unavailable error. How can I access the Neo4j logs on EC2? Can anybody help, please?
The steps to access the instance (and from there the logs) are given in the ec2neo stack's Output: select the CloudFormation stack that you used to create the instance and click on the Outputs tab. It will give you the actual ssh command to use to access the EC2 instance.
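If you prefer the API to the console, the same Outputs can be read with boto3; the stack name here is a placeholder for whatever you named the ec2neo stack:

import boto3

cfn = boto3.client('cloudformation', region_name='us-east-1')

# 'ec2neo' is a placeholder; use the name of the stack that created the instance.
stack = cfn.describe_stacks(StackName='ec2neo')['Stacks'][0]

for output in stack.get('Outputs', []):
    print(output['OutputKey'], '=', output['OutputValue'])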

How to create an Amazon VPC using AWS CloudFormation?

I am currently using AWS CloudFormation for my application. Right now I am able to auto-scale the instances. Now I want to put everything into an Amazon VPC. Can we create a VPC using CloudFormation? And how can we manage the Elastic IP address via CloudFormation when we have an ELB in the template? I have found a VPC-related example among the AWS CloudFormation Sample Templates, but it only provisions resources into an existing VPC and doesn't create a new one in the template.
Update
As pointed out by Jeff already (+1), AWS has just announced AWS CloudFormation Support for Creating VPC Resources as of April 25, 2012, covering the missing piece of their initial VPC support:
We are excited to announce that AWS CloudFormation now supports the
creation of Amazon Virtual Private Cloud (VPC) resources. [...]
Now, you can create new Virtual Private Clouds (VPC), subnets,
gateways, network ACLs, routes and route tables using CloudFormation
templates. [...]
[...] A CloudFormation template can now fully represent your VPC configuration
along with all the resources needed to run your application in the
VPC.
See Jeff Barr's introductory post AWS CloudFormation Can Now Create Virtual Private Clouds for more details and examples. In particular, the AWS CloudFormation Sample Templates feature two new sample templates [...] to get you started as well:
VPC with a single EC2 Instance - Sample template showing how to create a VPC and add an EC2 instance with an Elastic IP address and a security group.
VPC with public and private subnets, an Elastic Load Balancer, and an EC2 instance - Sample template showing how to create a VPC with multiple subnets. The first subnet is public and contains the load balancer; the second subnet is private and contains an EC2 instance behind the load balancer.
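To give a sense of what the new resource types look like in practice, here is a minimal, illustrative sketch (not one of the AWS sample templates) that declares a bare VPC and creates it as a stack via boto3:

import json
import boto3

# A deliberately minimal template: a single AWS::EC2::VPC resource.
template = {
    'AWSTemplateFormatVersion': '2010-09-09',
    'Resources': {
        'MyVPC': {
            'Type': 'AWS::EC2::VPC',
            'Properties': {
                'CidrBlock': '10.0.0.0/16',
                'EnableDnsSupport': True,
                'EnableDnsHostnames': True,
            },
        },
    },
    'Outputs': {
        'VpcId': {'Value': {'Ref': 'MyVPC'}},
    },
}

cfn = boto3.client('cloudformation', region_name='us-east-1')
cfn.create_stack(StackName='minimal-vpc', TemplateBody=json.dumps(template))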
Initial Answer
I don't think creating an Amazon VPC with AWS CloudFormation is supported yet.
While AWS has indeed just announced AWS CloudFormation Support For VPC as of February 12, 2012, this only covers deploying existing resource types into an existing VPC:
All resource types such as Amazon EC2 instances, security groups and
Elastic IP addresses, Elastic Load Balancers, Auto Scaling Groups and
Amazon RDS Database instances can now be deployed into any existing
Amazon VPC using CloudFormation templates. The templates allow you to
run multi-tier web applications and corporate applications in a
private network. With Amazon VPC and CloudFormation, you can easily
control which resources you want to expose publicly and which ones
should be private.
Amazon VPC is notably absent from this list, which matches the fact that it isn't listed in the supported AWS Resource Types Reference either.
It's supported now: see AWS CloudFormation Support for Creating VPC Resources for details.
