What should I set as the hostname of Couchbase server? - amazon-ec2

In Couchbase 4.0, during initial setup, there is a field to enter a hostname.
I am working on Amazon EC2 and use the public DNS as the hostname. This works, but every time I stop and restart the instance, my public DNS changes, which renders the entire configuration useless. Can you please suggest what I should do?
Thanks in advance!

If you want to survive the scenario you describe, the best option is not to use the EC2-generated public names, but to create your own FQDNs in Route 53 or another DNS service.
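For example, a stable name can be kept pointing at the instance with a small boto3 script (the hosted zone ID, record name, and IP below are hypothetical placeholders, not anything from the original answer):

import boto3

route53 = boto3.client("route53")

# Upsert an A record so the FQDN survives instance restarts.
route53.change_resource_record_sets(
    HostedZoneId="Z1EXAMPLE",                       # hypothetical zone ID
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "couchbase1.example.com.",  # your own FQDN
                "Type": "A",
                "TTL": 300,
                "ResourceRecords": [{"Value": "203.0.113.10"}],  # current IP
            },
        }]
    },
)

Re-running this, e.g. from a boot script, after each restart keeps the Couchbase hostname valid.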

Related

Adding a CNAME to an AWS EC2 Public Domain Name

I have a test application running at
http://ec2-34-215-196-193.us-west-2.compute.amazonaws.com/
(This is a test application; it won't be live for long.) When I try to add a CNAME record for this name, a trailing dot is added by the DNS system.
My app seems to be accessible only via the EC2 public DNS name directly; I can make that resolve fine.
But anything I point at it with a CNAME does not seem to resolve, and I get 503 Service Unavailable.
I am using AWS EC2 to host the app with an HAProxy load balancer, and Google Domains for the DNS name.
Any suggestions for troubleshooting this problem?
All DNS entries have a dot at the end, like subdomain.domain.com.
It's not suggested to create CNAMEs to your EC2 instance's public name, because that IP may vary over time and it's not reassignable; that's what Elastic IPs are made for. Just create an Elastic IP, assign it to your EC2 instance, and add it as an A record at your DNS provider.
Amazon AWS documentation
First create an Elastic IP and assign it to your instance. Then create an A record pointing at that IP. Your site should work normally.
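As a minimal sketch of those two steps in boto3 (the instance ID is a hypothetical placeholder):

import boto3

ec2 = boto3.client("ec2")

# Allocate an Elastic IP in the VPC and attach it to the instance.
alloc = ec2.allocate_address(Domain="vpc")
ec2.associate_address(
    AllocationId=alloc["AllocationId"],
    InstanceId="i-0123456789abcdef0",  # hypothetical instance ID
)

# Point the A record at this stable address at your DNS provider.
print(alloc["PublicIp"])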

How to restart a single-node Hadoop cluster on EC2

I have installed a single-node Hadoop cluster using Hortonworks/Ambari on an Amazon EC2 host.
Since I don't want this cluster running 24/7, I stop the instance when done. When I restart the instance later, I get a new IP address, and Ambari is no longer able to start the Hadoop-related services.
Is there a way, other than completely redeploying, to reconfigure the cluster so the services will start?
It looks like the IP address lives in various XML files under /etc, in the ambari Postgres database, and possibly other places I haven't found yet.
I tried updating the XML files and the Postgres database with the new IP address and the internal and external DNS names where I could find them, but to no avail. I have not been able to restart the services.
The reason I am doing this is to save the deployment time and the data on HDFS and other project-specific setup each time I restart the host.
Any suggestions?
Thanks!
An Elastic IP can be used. Also, since you mentioned it being a single-node cluster, you can use localhost or the private IP.
If you use an Elastic IP, your UIs will always be on the same public IP. However, if you use the private IP or localhost and do not associate your instance with an Elastic IP, you will have to look up the public IP every time you start the instance and then connect to the web UI using that IP.
Thanks for the help; both Harman and TJ are correct. I haven't used an Elastic IP because I might have more than one of these running at a time, and for now at least, I don't mind looking up the public IP address.
Harman's suggestion of using "localhost" as the FQDN when setting up Ambari in the first place is a really good idea in retrospect. Unless I go through the whole setup again, that's water under the bridge for me, but I recommend it to others who might read this post.
In my case, I figured this out on my own before coming back to this page. The specific step I took was insanely simple after all, thanks to Occam's razor.
I added the following line to /etc/hosts:
<new internal IP> <old internal dns name>
and then ran
ambari-server restart
from the command line. After that, I was able to restart all the services after logging into Ambari.
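If you stop and start the instance regularly, that edit can be automated at boot. A minimal Python sketch, which pulls the current private IP from the EC2 instance metadata service and rewrites the mapping (the old internal DNS name below is a hypothetical placeholder; use whatever name Ambari was configured with):

import urllib.request

OLD_NAME = "ip-172-31-0-10.ec2.internal"  # hypothetical old internal DNS name

# Ask the EC2 instance metadata service for the current private IP.
with urllib.request.urlopen(
    "http://169.254.169.254/latest/meta-data/local-ipv4"
) as resp:
    new_ip = resp.read().decode()

# Drop any stale mapping for the old name, then append the fresh one.
with open("/etc/hosts") as f:
    lines = [l for l in f if OLD_NAME not in l]
lines.append(f"{new_ip} {OLD_NAME}\n")
with open("/etc/hosts", "w") as f:
    f.writelines(lines)

Run it before ambari-server restart and the services come up against the fresh IP.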

DEIS: no public IPs

Following the instructions here:
https://github.com/deis/deis/tree/master/contrib/ec2
to deploy Deis to EC2 into a VPC, the CloudFormation stack starts up and creates the instances; however, the instances do not have public IPs, even though the subnet the instances are launched into has auto-assign public IPs enabled.
So, without the public IPs, I am not sure how to connect to the instances with fleet.
Anyone have any ideas on what I am missing?
By default, the provision scripts don't assign public IP addresses because the assumption is that the VPC you're provisioning into is internal to your network and that you have other means of access (like VPN).
However, you can easily provision your instances with public IPs by changing this line to True and redeploying.
We know this is confusing, and we're working to rewrite our EC2 provisioning scripts. Thanks for sticking with us!
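For reference, the equivalent knob in a plain boto3 launch call (the AMI, subnet, and instance type below are hypothetical placeholders, not taken from the Deis scripts) is AssociatePublicIpAddress:

import boto3

ec2 = boto3.client("ec2")

# The key part is AssociatePublicIpAddress=True on the network interface.
ec2.run_instances(
    ImageId="ami-12345678",           # hypothetical AMI
    InstanceType="m3.large",          # hypothetical instance type
    MinCount=1,
    MaxCount=1,
    NetworkInterfaces=[{
        "DeviceIndex": 0,
        "SubnetId": "subnet-abc123",  # hypothetical subnet in the VPC
        "AssociatePublicIpAddress": True,
    }],
)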
You need to get your computer connected into the VPC somehow; try this and see if you can VPN into your VPC using it.

Can an Amazon EC2 Instance access another Instance by Private IP?

I have two separate instances in my test scenario:
Web Server Instance
Database Server Instance
So far the only way I can get from the 1st to the 2nd instance is by having Elastic IPs configured and using the public DNS (or IP) reference. I can limit unwanted access by configuring the security group for the 2nd to accept port 1433 traffic only from the 1st.
It seems like instances within the same Amazon AWS zone should be able to talk to each other more efficiently than by first going out and then coming back in.
Is there a way to go directly from the 1st to the 2nd instance using just the private DNS (or IP)?
If you are using the Amazon public DNS name, Amazon makes sure that all internal traffic gets routed internally only: resolved from within EC2, the public name returns the instance's private IP. So there is no problem in using the public DNS names. Have a look at this question and this article for more details.
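You can check this from inside an instance: resolving the public name there returns the private address (the hostname below is a hypothetical placeholder):

import socket

# From inside EC2 this prints a private address (e.g. 10.x.x.x),
# showing that instance-to-instance traffic stays on the internal network.
print(socket.gethostbyname("ec2-203-0-113-10.compute-1.amazonaws.com"))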

Solution for local ip changes of AWS EC2 instances

Amazon only gives you a certain number of static IP addresses, and the local (private) IP of each EC2 instance can change when the machine is restarted. As far as I can tell, this makes it ridiculously hard to build a stable platform where EC2 instances depend on each other.
I've searched online a lot about various solutions and so far have found nothing reasonable beyond assigning an Elastic IP address to every EC2 instance, even if it's not public-facing. Does anyone have any other good ideas that are actually easy to execute?
Thanks!
See the AWS team's response to question Static local IP:
The internal IP address of EC2 instances is allocated via DHCP. On instance shutdown, or when the DHCP lease expires, the IP address is returned to the general EC2 DHCP pool of addresses available for other instances.
There is no way to guarantee that you will obtain the same DHCP address across reboots.
Edit: The answer is to use Amazon VPC. There is no downside except a trivial amount of extra setup, because now you control the router. It's a world apart from plain old EC2 on AWS. It's so necessary, in fact, that VPC will be enabled by default for all future AWS setups. See this post for more information: http://www.reddit.com/r/aws/comments/1a3n0r/ec2_update_virtual_private_clouds_for_everyone/
The stock answers are:
Use AWS VPC so you have complete control over instance addressing
Use Elastic IPs, which will resolve to the instance's local address (not the public one, as you might expect) when used to communicate between EC2 instances
I stumbled upon a third option. There's ec2-ssh by the Instagram folks. It's a Python shell script that you install globally; it lets you query the public DNS of your EC2 instances by tag name and also SSH in via tag name.
The documentation for it is virtually nonexistent, so I've written down the installation steps below:
To install ec2-ssh:
sudo yum install python-boto (the Python wrapper for the EC2 API)
git clone https://github.com/Instagram/ec2-ssh
In your ~/.bash_profile set your AWS access key and secret like so:
export AWS_ACCESS_KEY_ID=XYZ123
export AWS_SECRET_ACCESS_KEY=XYZ123
cd into the bin folder of the repo; there will be two files:
ec2-host and ec2-ssh
Copy them to /usr/bin or /usr/local/bin.
Now you can do awesome stuff like:
$ ec2-host ZenWorker
ec2-999-xy-999-99.compute-1.amazonaws.com
and
$ ec2-ssh ZenWorker
Connecting to ec2-999-xy-999-99.compute-1.amazonaws.com.
Note that in your regular shell scripts you can use backticks to call these global tools. I've timed these calls and they take between 0.25 and 0.5 seconds on an EC2 instance, so that's really the only downside. Perhaps you can live with the delay, or use the fact that an instance's public DNS only changes on reboot to work up a caching solution.
Note that these two programs are command-line scripts and you don't need any Python knowledge to use them. For PHP fans, or those who also want an easy way to scp files without knowing the changing public DNS, you can check out ec2dns.
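If you'd rather not install anything, the lookup that ec2-host performs is only a few lines with boto3, the modern successor of the boto library used above (the tag value is whatever Name tag you gave the instance):

import boto3

def ec2_host(name):
    """Return public DNS names of running instances whose Name tag matches."""
    ec2 = boto3.client("ec2")
    resp = ec2.describe_instances(Filters=[
        {"Name": "tag:Name", "Values": [name]},
        {"Name": "instance-state-name", "Values": ["running"]},
    ])
    return [inst["PublicDnsName"]
            for res in resp["Reservations"]
            for inst in res["Instances"]]

print(ec2_host("ZenWorker"))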
I was in the same situation once. I still don't have the expertise to solve it properly. My ugly solution was to use an ELB, not really for load balancing but just for the stable endpoint.
But I think a good solution can be obtained by using AWS VPC.
Here's another Ruby solution for updating Route 53 DNS from an instance on AWS. You shouldn't reference raw third-party system IP addresses in your applications or server configurations.
You can change the IP address using an Elastic IP. Using the AWS SDK for .NET in C#, first associate the Elastic IP:
var associateRequest = new AssociateAddressRequest
{
    PublicIp = elasticIp,      // your Elastic IP
    InstanceId = instanceId    // the instance to assign it to
};
amazonEc2Client.AssociateAddress(associateRequest);
After that, disassociate it:
var disassociateRequest = new DisassociateAddressRequest
{
    PublicIp = elasticIp
};
amazonEc2Client.DisassociateAddress(disassociateRequest);
Your public IP will change.