We have a RHEL 7.2 EC2 instance and we are trying to install Oracle 12c EE server. We have assigned an Elastic IP to the instance to make sure that the public IP address does not change when we restart the server. But we saw that the hostname of the instance changes on a server restart.
Problem: there are a few steps in the Oracle installation where we need to specify the hostname of the EC2 instance (i.e. the private DNS name), so we are hardcoding the hostname during the Oracle installation. The problem is that if the hostname changes on every server restart, the installed software won't work (since it holds the previous hostname). How do we resolve this issue?
Please let us know on the best practices to resolve this issue.
IP addresses do not change in EC2 with a simple restart. They only change with a complete stop, followed later by a start. If you are using a VPC, which you most likely are, then the private IP address will not change even with a stop/start.
If you want a solution that will work even if you move the installation to a different EC2 instance, then you should create a Route53 private hosted zone, attach it to your VPC, and then create a custom DNS name for this server.
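With the AWS CLI that could look roughly like this (a sketch only; the zone name, VPC ID, hosted zone ID and private IP are placeholders you would replace with your own):
aws route53 create-hosted-zone --name example.internal \
    --vpc VPCRegion=us-east-1,VPCId=vpc-0abc1234 \
    --caller-reference oracle-zone-1
aws route53 change-resource-record-sets --hosted-zone-id Z1234567890ABC \
    --change-batch '{"Changes":[{"Action":"UPSERT","ResourceRecordSet":{"Name":"oracledb.example.internal","Type":"A","TTL":300,"ResourceRecords":[{"Value":"10.0.1.25"}]}}]}'
The Oracle installation then references oracledb.example.internal instead of the instance's default hostname, so the name survives restarts and even a move to a new instance.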
If you are using a VPC (which is the default now), the private IP should not change upon a restart or a stop/start.
My understanding is that you're having an issue with the hostname resetting to the default ip-x-y-z-k on OS reboot, which causes problems for the Oracle database.
This is usually caused by cloud-init (embedded in the AMI).
I suggest you go through these steps:
First, set the hostname in your OS:
$ hostnamectl set-hostname Your-New-Host-Name-Here --static
Edit your '/etc/hosts' to match the private IP:
<private_ip> <hostname>
Check the value of HOSTNAME in '/etc/sysconfig/network'; it should match your hostname.
Finally, to solve the issue, I suggest removing the following lines from '/etc/cloud/cloud.cfg' (see the sketch after the list):
set_hostname
update_hostname
update_etc_hosts
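A minimal sketch of that edit, assuming the stock cloud-init config shipped with the RHEL 7 AMI (commenting the modules out works just as well as deleting them, and setting preserve_hostname: true has the same effect):
sudo sed -i -e 's/^\( *- set_hostname\)/#\1/' \
            -e 's/^\( *- update_hostname\)/#\1/' \
            -e 's/^\( *- update_etc_hosts\)/#\1/' /etc/cloud/cloud.cfg
# or, alternatively, tell cloud-init to leave the hostname alone entirely
echo 'preserve_hostname: true' | sudo tee -a /etc/cloud/cloud.cfg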
To test whether it works, stop and start the instance; the private IP should stay the same as before and the hostname should be the one you defined.
I hope this helps.
G.
As far as I know, if you create an image from a running instance, it would by default reboot the instance. Do correct me if I am wrong on this.
For my situation, my free Elastic IPs are all used up and I need to do some heavy modification on the instance's operating system. Before proceeding with those modifications, I would like to at least do a complete backup of everything, which means I need to create an AMI and take a snapshot of the EBS volume before proceeding. The problem is, I can't afford to lose the public and private IP addresses of that instance, because it would take me more work to update all the other software on different servers that connects to it (unless of course I mess it up and have to use the backup AMI after all).
So my questions are:
If I simply create an image from that instance while it is still running, without stopping it, it will reboot by default, but would that change its public and private IP addresses? I noticed that a normal "reboot" when you right-click the instance does not change those IP addresses. Is it the same kind of "reboot" when you create an image without stopping the instance?
Is it safer to stop the instance first before creating an image, or is creating the image while it's running safe enough? Consider data integrity.
Thank you
The default reboot during AMI creation will just do a normal reboot. It will not change IP addresses.
The Private IP address will never change.
The Public IP address might change if the instance is stopped.
Best practice is to either use an Elastic IP address (free if attached to a running instance, and you can request more if you need them) or use a DNS name that resolves to an IP address. That way, if the IP address changes, simply update the DNS entry without needing to change any references.
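If you want to avoid even that reboot, the CLI has a flag for it; roughly like this (a sketch only, with a placeholder instance ID and image name, and note that skipping the reboot trades a cleanly quiesced file system for zero downtime):
aws ec2 create-image --instance-id i-0abc1234def567890 \
    --name "pre-migration-backup" --no-reboot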
I have installed a single-node Hadoop cluster using Hortonworks/Ambari on an Amazon EC2 host.
Since I don't want this cluster running 24/7, I stop the instance when done. When I start the instance again later, I get a new IP address, and then Ambari is no longer able to start the Hadoop-related services.
Is there a way other than completely redeploying to reconfigure the cluster so the services will start?
It looks like the IP address lives in various XML files under /etc, in the Ambari Postgres database, and possibly other places I haven't found yet.
I tried updating the XML files and the Postgres database with the new IP address and the internal and external DNS names wherever I could find them, but to no avail. I have not been able to restart the services.
The reason I am doing this is to save the deployment time, the data on HDFS, and other project-specific setup each time I restart the host.
Any suggestions?
Thanks!
An Elastic IP can be used. Also, since you mentioned it being a single-node cluster, you can use localhost or the private IP.
If you use an Elastic IP, your UIs will always be on the same public IP. However, if you use the private IP or localhost and do not associate your instance with an Elastic IP, you will have to look up the public IP every time you start the instance and then connect to the web UI using that IP.
Thanks for the help; both Harman and TJ are correct. I haven't used an Elastic IP because I might have more than one of these running at a time, and for now at least, I don't mind looking up the public IP address.
Harman's suggestion of using "localhost" as the FQDN when setting up Ambari in the first place is a really good idea in retrospect. Unless I go through the whole setup again, that's water under the bridge for me, but I recommend this to others who might read this post.
In my case, I figured this out on my own before coming back to the page. The specific step I took was insanely simple after all, thanks to Occam's Razor.
I added the following line in /etc/hosts:
<new internal IP> <old internal dns name>
and then ran
ambari-server restart
from the command line. Then I was able to restart all services after logging into Ambari.
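In concrete terms the fix was just the following (the IP and internal DNS name here are hypothetical; use your instance's new private IP and the FQDN Ambari was originally registered with):
echo '10.0.2.77  ip-10-0-1-25.ec2.internal' | sudo tee -a /etc/hosts
sudo ambari-server restart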
I've spent some time now looking for information regarding elasticsearch.yml configurations that make my single-instance Elasticsearch (on a Windows 2012 Server EC2 instance) accessible via the public IP, but every time I uncomment one or both of the following settings, the only thing that changes is that calling the private IP also results in an error.
network.publish_host: <public ip>
network.bind_host: <private ip>
Is this correct and are there any other settings that have to be defined? Shouldn't it run with the default values?
This is more of a general answer as to how networking works within EC2 instead of a specific answer to your question. But it should help inform how to configure your application.
EC2 has 1:1 NAT between a public and private IP address. Because of this, only the private IP address is visible to the instance directly.
If you are binding a service to a network interface, it would be the one with the private IP.
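You can see this from inside a Linux instance, for example: only the private address appears on the network interface, while the public address is exposed only through the instance metadata service (the addresses below are made up):
$ ip -4 addr show eth0 | grep inet
    inet 10.0.1.25/24 brd 10.0.1.255 scope global eth0
$ curl -s http://169.254.169.254/latest/meta-data/public-ipv4
54.12.34.56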
Some services do require knowledge of the external IP address in order to function properly. The only one I have run into is FTP in a passive configuration, likely due to the fact that it needs to open a separate socket for data transfer.
In the case of Elasticsearch, it appears that they have a special plugin that will help configure Elasticsearch for the AWS environment: http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/modules-network.html
I had the same problem.
I installed only one instance of ES on an AWS EC2 instance and wanted to grant it public access.
On Ubuntu 16.04 this is what works for me:
In /etc/elasticsearch/elasticsearch.yml add this line:
network.host: <ec2 instance private ip>
The private IP should be something like 172.x.x.x
Also, do not forget to allow access in the security group in your AWS console for port 9200 (the default) and from the IP address you will be sending requests from.
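The same rule can be added from the CLI, roughly like this (the group ID and source CIDR are placeholders):
aws ec2 authorize-security-group-ingress --group-id sg-0abc1234 \
    --protocol tcp --port 9200 --cidr 203.0.113.10/32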
So the difference was setting the private IP address (the one shown in the AWS console), not the public one.
Also note that this can be dangerous, as there is no user/password or other access control.
This is probably incredibly simple and I'm just missing one step. The problem I was (originally) trying to solve was how to get a statically allocated hostname, one that would not change with each restart. I've done the following steps:
I have a domain registered on GoDaddy, and it points to my EIP. I use it to connect over SSH (putty) to my EC2 instance, so I know that part is working. I've opened ports 9080, 9060, 9043, and 9443 as well as SSH and FTP ports. And I've installed and started the software that uses those ports, and that stuff normally just works on a local RHEL install, so I think what's different here is the custom domain name.
I've added my EIP and fully qualified host name to my /etc/hosts file.
I've added my fully qualified host name to my /etc/hostname file and modified the /etc/rc.local script to set the hostname properly on a restart, and that works. If I execute the command hostname, it returns my fully qualified hostname, so that looks ok.
I cannot ping my server, but I think that's OK, because Amazon probably blocks pings. So I don't think that's a symptom of anything.
I cannot open a connection to http://myserver.mydomain:9080/, which normally just works. Here it just times out.
If I do a wget http://myserver.mydomain:9080 from inside the EC2 instance, it returns failed: No Route To Host
But if I do a wget against localhost instead of the fully qualified name I get what I expect as a response.
So... routing tables? Do those need to change? And if so, how?
You probably don't want to do what you did. Everything in EC2 is NAT'd, meaning that the IP assigned to your instance is a private/internal IP and the public IP is mapped to it by the routing system.
So internally, you want everything to resolve to the private IP, or you will get charged for traffic as it has to get routed to the edge and then back in. Using the public DNS name will resolve correctly from the default DNS servers.
If you are using RHEL, you will need to make sure both the security group and the internal firewall (iptables) have the ports opened. You could just disable the internal firewall since it's a bit redundant with the security groups. On the other hand, it can provide some options security groups do not, if you need them.
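On RHEL 7 the internal firewall is typically firewalld (managing iptables underneath), so opening the application ports would look roughly like this (a sketch; adjust the port list to whatever your software actually uses):
sudo firewall-cmd --permanent --add-port=9080/tcp --add-port=9060/tcp
sudo firewall-cmd --permanent --add-port=9043/tcp --add-port=9443/tcp
sudo firewall-cmd --reload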
Amazon only gives you a certain number of static IP addresses, and the local (private) IPs of each EC2 instance can change when the machine is restarted. This makes creating a stable platform where EC2 instances depend on each other ridiculously hard, as far as I can tell.
I've searched online a lot for various solutions and so far have found nothing reasonable outside of assigning an Elastic IP address to every EC2 instance, even if it's not public facing. Does anyone have any other good ideas that are actually easy to execute on?
Thanks!
See the AWS team's response to the question Static local IP:
The internal IP address of EC2 instances is allocated via DHCP. On instance shutdown, or when the DHCP lease expires, the IP address is returned to the general EC2 DHCP pool of addresses available for other instances.
There is no way to guarantee that you will obtain the same DHCP address across reboots.
Edit: The answer is to use Amazon VPC. There is no downside except a trivial amount of extra setup, because now you control the router. It's a world apart from a plain old EC2 instance on AWS. It's so necessary, in fact, that VPC will be enabled for all future AWS setups by default. See this post for more information: http://www.reddit.com/r/aws/comments/1a3n0r/ec2_update_virtual_private_clouds_for_everyone/
The stock answers are:
Use AWS VPC so you have complete control over instance addressing
Use Elastic IPs, whose public DNS names resolve to the instance's local address (not the public one, as you'd expect) when used to communicate between EC2 instances (illustrated below)
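You can verify the second point with a quick lookup: run from inside EC2, the public DNS name resolves to the private address, while from outside it resolves to the public one (the hostname and addresses here are made up):
$ dig +short ec2-54-12-34-56.compute-1.amazonaws.com
10.0.1.25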
I stumbled upon a third option. There's ec2-ssh by the Instagram folks. It's a Python script that you install globally and that lets you both query the public DNS of your EC2 instances by tag name and SSH in via tag name as well.
The documentation for it is virtually nonexistent. I've written down the steps to install below:
To install ec2-ssh:
sudo yum install python-boto (the Python wrapper for the EC2 API)
git clone https://github.com/Instagram/ec2-ssh
In your ~/.bash_profile set your AWS access key and secret like so:
export AWS_ACCESS_KEY_ID=XYZ123
export AWS_SECRET_ACCESS_KEY=XYZ123
cd into the bin folder of the repo; there will be two files:
ec2-host and ec2-ssh
copy them to your /usr/bin or /usr/local/bin.
Now you can do awesome stuff like:
$ ec2-host ZenWorker
ec2-999-xy-999-99.compute-1.amazonaws.com
and
$ ec2-ssh ZenWorker
Connecting to ec2-999-xy-999-99.compute-1.amazonaws.com.
Note that in your regular shell scripts you can use backticks to call these global tools. I've timed these calls and they take between 0.25 and 0.5 seconds on an EC2 instance, so that's really the only downside. Perhaps you can live with the delay, or use the fact that the public DNS only changes for an instance on a stop/start to work up a solution.
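For instance, something like this works in a script (the tag name is just the example one from above):
# copy a file to the instance tagged ZenWorker without knowing its current public DNS
scp backup.tar.gz ec2-user@`ec2-host ZenWorker`:/tmp/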
Note that these two programs are command-line scripts and you don't need any Python knowledge to use them. For PHP fans, or those that also want an easy way to scp files without knowing the changing public DNS, you can check out ec2dns.
I was in the same situation once. I still don't have the expertise to solve it properly. My ugly solution was to use an ELB, not really for load balancing but just for the stable endpoint.
But I think a good solution can be obtained by using AWS VPC.
Here's another Ruby solution for updating Route 53 DNS from an instance on AWS. You shouldn't reference raw third-party system IP addresses in your applications or server configurations.
You can change the IP address using an Elastic IP.
You can do it with C# code (the AWS SDK for .NET):
var associateRequest = new AssociateAddressRequest
{
    PublicIp = "your-elastic-ip",
    InstanceId = "your-instance-id"
};
amazonEc2Client.AssociateAddress(associateRequest);
After that, disassociate it:
var disassociateRequest = new DisassociateAddressRequest { PublicIp = "your-elastic-ip" };
amazonEc2Client.DisassociateAddress(disassociateRequest);
Your public IP will change.