Why does cfn-init respond with this odd message? - Windows

I have trouble getting cfn-init to work on Windows.
I do this:
cfn-init.exe -v -c config
-s arn:aws:cloudformation:eu-north-1:894422057177:stack/Providence-Core-A47K89HAVG6V/20f830c0-05cd-12ea-9527-06c34fc32621
-r MyHost
--region eu-north-1
(line breaks added for clarity)
and get as a result:
('Connection aborted.', error(10060, 'A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond'))
What causes this error? What is it that cannot be reached? (it is not the most verbose error message I've seen :-))
Is cfn-init.exe actually trying to access something on the network? If so, what target address?
My outbound rules are fairly restrictive, both for the Network ACL and the SecurityGroup. They don't allow general outbound access for HTTP/HTTPS. Is that the reason?

Bottom line: yes, the cfn-init command indeed makes an outbound HTTPS request. Your SecurityGroup, subnet ACLs, etc. must allow this.
The cfn-init command attempts to download the relevant CloudFormation stack's metadata from the AWS CloudFormation endpoint, which is on the public internet. Therefore, if cfn-init is used, the EC2 instance must have outbound access to that endpoint.
If you don't want to grant generic outbound internet access to your EC2 instance, Amazon offers a VPC endpoint for the AWS CloudFormation service.
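If you go the VPC endpoint route, creating an interface endpoint for CloudFormation from the AWS CLI looks roughly like this (a sketch only; the VPC, subnet, and security group IDs are placeholders, and the endpoint's security group must allow inbound HTTPS from the instance):
# Create an interface VPC endpoint so cfn-init can reach CloudFormation without internet access.
aws ec2 create-vpc-endpoint \
  --region eu-north-1 \
  --vpc-id vpc-0123456789abcdef0 \
  --vpc-endpoint-type Interface \
  --service-name com.amazonaws.eu-north-1.cloudformation \
  --subnet-ids subnet-0123456789abcdef0 \
  --security-group-ids sg-0123456789abcdef0 \
  --private-dns-enabled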

Related

Unable to configure proxy for Terraform with AWS provider

We are running Terraform v0.12.20 to provision infrastructure in AWS. We have installed Terraform on an EC2 instance, and we need our corporate proxy configured in order to communicate with services outside our network. We have sts.amazonaws.com configured in our no_proxy. Terraform is not respecting the proxy configured in the environment variables, so it times out trying to connect to sts.amazonaws.com. Here is the proxy configuration on the instance.
http_proxy=XXX:YYY
https_proxy=XXX:YYY
HTTPS_PROXY=XXX:YYY
no_proxy=sts.amazonaws.com
NO_PROXY=sts.amazonaws.com
HTTP_PROXY=XXX:YYY
This is the error I'm getting when trying to run terraform init.
error validating provider credentials: error calling sts:GetCallerIdentity: RequestError: send request failed. caused by: Post https://sts.amazonaws.com/: dial tcp 54.239.21.217:443: i/o timeout
Can someone help me configure the proxy with Terraform?
Thank you.
It looks like it's doing exactly what you told it to. You say your environment requires an HTTP proxy to access the internet, but you've put sts.amazonaws.com into no_proxy, which is the environment variable for sites you explicitly do not wish to proxy. Hence Terraform is not using your proxy to reach sts.amazonaws.com, and it is failing. Simply put, remove sts.amazonaws.com from your no_proxy variable.
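A minimal sketch of what that looks like in the shell that runs Terraform (the proxy host/port placeholders are the same as in the question; since sts.amazonaws.com was the only entry in no_proxy here, simply unsetting it is enough):
# Keep the proxy variables pointing at the corporate proxy.
export http_proxy=XXX:YYY https_proxy=XXX:YYY
export HTTP_PROXY=XXX:YYY HTTPS_PROXY=XXX:YYY
# Drop the exclusion so STS calls go through the proxy too.
unset no_proxy NO_PROXY
terraform init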

Forward Traffic from Windows EC2 Instance to ElasticSearch VPC Endpoint

I have a Windows EC2 instance I use for my public-facing C# API. The VPC (and related Internet Gateway, subnets, etc.) are all default.
I've now set up an AWS Elasticsearch Service domain using their more secure VPC endpoint option (instead of public-facing), and I've associated it with the same subnet and VPC as my Windows EC2 instance above.
I'd like to get them to talk to each other.
Reading https://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/es-vpc.html,
it seems what you'd do is SSH tunnel / port forward traffic from localhost:9200 on the EC2 instance to the actual Elasticsearch service (via that VPC endpoint).
It seems this command is where the magic happens:
ssh -i ~/.ssh/your-key.pem ec2-user@your-ec2-instance-public-ip -N -L 9200:vpc-your-amazon-es-domain.region.es.amazonaws.com:443
but that is for a Linux EC2 instance.
If I am Remote Desktopped into my Windows EC2 instance (the API), how can I make it so that when I go to http://localhost:9200 in a browser,
the traffic will be sent to my VPC endpoint:
vpc-your-amazon-es-domain.region.es.amazonaws.com:443
Thanks!
Alright, so I'll answer my two questions:
First, it's actually quite easy: just RDP to your box and access the Elasticsearch domain directly via the VPC endpoint. You don't need to do anything wacky like port forwarding with the netsh command. Simply make sure the server (in my case my API) is in the same VPC and you're fine. I just had an error in my connection string; that's why it didn't connect. To confirm, I RDP'd in and was able to hit the endpoint directly in a browser on port 80. While it's true that Elasticsearch itself runs on port 9200, you don't need to forward localhost:9200 --> vpc:9200.
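(For reference, if you ever did want the Windows equivalent of that SSH tunnel, a netsh portproxy rule is roughly it. This is only a sketch using the placeholder domain name from the question; note the VPC endpoint serves HTTPS on 443, so you would have to browse to https://localhost:9200 and accept the certificate-name mismatch.)
rem Forward local port 9200 to the Elasticsearch VPC endpoint on port 443 (run in an elevated prompt).
netsh interface portproxy add v4tov4 listenaddress=127.0.0.1 listenport=9200 connectaddress=vpc-your-amazon-es-domain.region.es.amazonaws.com connectport=443
rem Remove the rule again when you are done.
netsh interface portproxy delete v4tov4 listenaddress=127.0.0.1 listenport=9200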
Now, regarding the second question about hitting it locally: because the service lacks a public IP address, you can't access it from outside without some complicated setup on AWS. The easier option is to run Elasticsearch locally for now, until you are ready to use the VPC one (and your code will then just run). Another option is to use security groups and make a publicly accessible cluster for now; then, when your code and search service/layer are done, you can start anew with a VPC/secure Elasticsearch service and that should be it.
Another thing many people mention is that it is cheaper, and you have more control, if you set up your own Elasticsearch on your local machine and then on EC2 (this is just from reading blogs and seeing how much frustration people had with it).

Oracle installation failed on AWS redhat EC2 instance

I'm trying to install Oracle on an AWS Red Hat instance, following the steps at this URL: http://www.davidghedini.com/pg/entry/install_oracle_11g_xe_on
When I run the configure command as follows,
/etc/init.d/oracle-xe configure
it gives the following error:
Database Configuration failed. Look into /u01/app/oracle/product/11.2.0/xe/config/log for details
When I check the log files, they show the following error:
ORA-01034: ORACLE not available
Process ID: 0
Session ID: 0, Serial number: 0
It seems to be an issue specific to the AWS cloud instance.
Is it because of swap memory?
Or is it because of a port issue?
I'm using a micro instance.
How can I get past this?
This might be an EC2 security group issue, with the installer needing outbound network access on some port (a license check, maybe?).
If your EC2 instance is very tightly locked down, you could test whether it's a security group issue by adding a new outbound security group rule that allows all TCP traffic out to anywhere on the internet (0.0.0.0/0).
For example, the installer might be trying to reach a remote licensing server endpoint via HTTP or HTTPS, but your security group doesn't allow that traffic out.
Perhaps there's a 'verbose' flag you can run the installer with that would give you more info about what it's failing on? HTH
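If you want to test that quickly from the AWS CLI, a temporary allow-all-TCP egress rule looks something like this (the security group ID is a placeholder; remove the rule again once you've finished testing):
# Temporarily allow all outbound TCP from the instance's security group.
aws ec2 authorize-security-group-egress --group-id sg-0123456789abcdef0 --protocol tcp --port 0-65535 --cidr 0.0.0.0/0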

Connect to RabbitMQ on EC2 from external client

Similar questions have been asked
RabbitMQ on Amazon EC2 Instance & Locally?
and
cant connect from my desktop to rabbitmq on ec2
But they get different error messages.
I have a RabbitMQ server running on my Linux EC2 instance, which is set up correctly. I have created custom users and given them permissions to read/write to queues. Using a local client I am able to correctly receive messages. I have set up the security groups on EC2 so that the ports (5672/25672) are open, and I can telnet to those ports. I have also set up rabbitmq.conf like this.
[
  {rabbit, [
    {tcp_listeners, [{"0.0.0.0", 5672}]},
    {loopback_users, []},
    {log_levels, [{connection, info}]}
  ]}
].
At the moment I have a client on the server publishing to the queue.
I have another client running on a server outside of EC2 which needs to consume data from the same queue (I can't run both on EC2, as the consumer does a lot of plotting/graphical manipulation).
However, when I try to connect from the external client using some test code,
try {
    ConnectionFactory factory = new ConnectionFactory();
    // amqp://user:password@host:port/ (credentials and host redacted in the original)
    factory.setUri("amqp://****:****@****:5672/");
    connection = factory.newConnection();
} catch (Exception e) { // setUri and newConnection also throw checked exceptions besides IOException
    e.printStackTrace();
}
I get the following error.
com.rabbitmq.client.AuthenticationFailureException: ACCESS_REFUSED - Login was refused using authentication mechanism PLAIN. For details see the broker logfile.
However, there is nothing in the broker logfile, as if I had never tried to connect.
I've tried connecting using the individual getter/setter methods of the factory, and I've tried using different ports (along with opening them up).
I was wondering whether I need to use SSL to connect to EC2, but from reading around the web it seems like it should just work; I'm not exactly sure, and I cannot find any examples of people successfully achieving what I'm trying to do and documenting it.
Thanks in advance
The answer was simply that I needed to specify the host as the same IP I use to SSH in. I was trying to use the Elastic IP/public DNS of the EC2 instance, which I thought should point to the same machine.
Although I tried many things, including setting up an SSL connection, that was not necessary.
All that is needed is:
Create a RabbitMQ user with rabbitmqctl and give it appropriate permissions (example commands below)
Open the needed ports on EC2 via the Security Groups menu (the default is 5672)
Use the client library to connect with the correct host name/username/password/port, where the host name is that of the machine you normally SSH into
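For the first step, the commands are roughly the following (run on the EC2 host; the user name, password, and vhost are placeholders):
# Create a user and grant it configure/write/read permissions on the default vhost.
rabbitmqctl add_user myuser mypassword
rabbitmqctl set_permissions -p / myuser ".*" ".*" ".*"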

Amazon Instance Ec2 Connection Timeout

I am using Amazon EC2 services and it was working correctly, but since three days ago, when we try to access our instance over an SSH connection we get the following error:
"ssh: connect to host ec2----***.compute-1.amazonaws.com port **: Connection timed out"
When I try to access the sites deployed on our EC2 instance, I receive a similar timeout error:
"The connection has timed out
The server at ec2----***.compute-1.amazonaws.com is taking too long to respond"
There is no problem with the network connection on our side, as we are able to access other web sites and services smoothly.
Because of this, I'm not even able to access the hosted site.
I encountered the same problem.
I followed the troubleshooting in http://alestic.com/2010/05/ec2-move-ebs-boot-instance
Then, when I tried to start a new instance, I got a message from Amazon:
Server.InsufficientInstanceCapacity: We currently do not have sufficient m1.small capacity in the Availability Zone you requested (us-east-1b). Our system will be working on provisioning additional capacity. You can currently get m1.small capacity by not specifying an Availability Zone in your request or choosing us-east-1d, us-east-1c, us-east-1a.
Maybe you have an instance in us-east-1b, too.
You can try to access the system console (either via the Amazon web console or ElasticFox) and check for any errors/messages that might help you find the cause of this.
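If you prefer the command line, the same console output can be pulled with something like this (the instance ID is a placeholder):
# Fetch the instance's console output to look for boot or network errors.
aws ec2 get-console-output --instance-id i-0123456789abcdef0 --output text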
In ~/.ssh/config, add the following line:
ServerAliveInterval 50
This will send a keep-alive message to the server every 50 seconds to keep the connection alive.
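A fuller ~/.ssh/config entry might look like this (the host alias, host name, and key path are hypothetical):
Host my-ec2
    HostName ec2-203-0-113-10.compute-1.amazonaws.com
    User ec2-user
    IdentityFile ~/.ssh/my-key.pem
    # Send a keep-alive every 50 seconds so idle sessions are not dropped.
    ServerAliveInterval 50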
