Thought I'd put this out there for anyone having similar problems.
Info: running Linux (Ubuntu), x64, the latest Java (1.7.25) and Tomcat (7.40) at the time.
I had set up two Apache Tomcat instances locally, with the same webapp on both, to test some Ehcache functionality; the cache replicated and all was great.
Once I moved the second Tomcat instance to a dedicated server, Ehcache no longer replicated between my PC and the server (still in the testing phase).
All the configs were basically the same, except of course for the IP addresses.
As I wasn't using the DNS hostnames of the server and my computer, I had instead added hostnames to the /etc/hosts file on both the PC and the server, thinking this would be sufficient.
After a couple of hours of teeth-grinding, I decided to remove all the entries I had added to /etc/hosts and instead use the boring hostnames from our company's DNS (pc-103-15-87.xxxxx.com and pc-104-15-87.xxxxx.com), and then everything worked.
Running Wireshark before I had the solution showed me that the two Ehcache instances were chatting, but the IP address 127.0.0.1 (probably used as some sort of callback address) was "mentioned" in the communication between them.
This led me to remove everything I had added to /etc/hosts and use the DNS-resolvable names. After that, Wireshark showed all the correct IP addresses and everything was gold.
Hope this will help anyone with a similar problem.
XML:
The info for the peer:
<cacheManagerPeerProviderFactory
class="net.sf.ehcache.distribution.RMICacheManagerPeerProviderFactory"
properties="peerDiscovery=manual,
rmiUrls=//pc-103-15-87.xxxxx.com:40002/persons|//pc-103-15-87.xxxxx.com:40002/wordCache"/>
and the listener on this server:
<cacheManagerPeerListenerFactory
class="net.sf.ehcache.distribution.RMICacheManagerPeerListenerFactory"
properties="hostname=pc-104-15-87.xxxxx.com, port=40002, socketTimeoutMillis=3000"/>
The solution was to use the hostnames that were in the organization's DNS, and not rely on my own entries in /etc/hosts.
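For anyone hitting the same thing, a quick way to check what a hostname actually resolves to before wiring it into the Ehcache config (the hostnames here are the placeholder ones from above):
getent hosts pc-104-15-87.xxxxx.com   # should show the real LAN address, not 127.0.0.1 or 127.0.1.1
hostname -f                           # the fully qualified name this machine reports for itself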
Despite ElasticSearch and Kibana both running on my production server, I'm unable to visit the GUI over the public IP: http://52.4.153.19:5601/
Curling localhost returns 200, but the browser console reports timeout errors after a few images are retrieved.
I've successfully installed, run, and accessed Kibana on my local (Windows 10) and on my staging AWS EC2 Ubuntu 14.04 environment. I'm able to access both over port 5601 on localhost and the staging environment is accessible over the public IP address and all domains addressed accordingly. The reverse proxy also works and all status indicators are green on the dashboard.
I'm running Kibana 4.5, ElasticSearch 2.3.1, Apache 2.4.12
I've used the exact same volume from the working environment attached to the production instance, so everything is identical on the two volumes, except that the staging environment's Apache vhost uses a subdomain while the production environment's ServerName is the base domain. Both are configured for SSL wildcards. Both are in separate availability zones at Amazon. I've tried altering the server block to use a subdomain on the production server, just to see if the domain made a difference, but the error remains.
I also tried running one instance individually, in case EC2 had some kind of networking error with 0.0.0.0, but I'm unable to come to a resolution. All logs and configurations are identical between the two servers for ElasticSearch and Kibana.
I've tried deleting and re-creating the kibana index, and tried alternate settings for the host, the elasticsearch URL, extending the max ping and timeout, max retries, extending the Apache limits, and http.cors to allow different origins. I've tried other ports, but both servers indicate that 5601 is listening in the same way.
I also had the same problem on a completely different volume that was previously attached to this instance.
The only difference I can see is that the working version pings fine, while the non-working version has 100% packet loss when pinging the IP, although I can't imagine why that would be, as I'm able to reach the website on port 80 just fine. I can also access various other tools running on other ports. I assume there might be some kind of networking conflict. Any ideas?
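For anyone debugging something similar, a quick way to compare the two boxes is to check what address Kibana is actually bound to and whether the port answers from outside (netstat and curl assumed available; the public IP is the one above):
sudo netstat -tlnp | grep 5601    # is Kibana bound to 0.0.0.0 or only 127.0.0.1?
curl -v http://52.4.153.19:5601/  # run from a machine outside AWS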
Maybe port 5601 is blocked by a firewall.
Allow incoming connections to port 5601 by:
sudo iptables -I INPUT -p tcp --dport 5601 -j ACCEPT
For security:
Modify the above command to accept connections only from a specific address (see man iptables),
or use the Shield plugin for Elasticsearch.
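For example, a minimal sketch that accepts connections only from a single trusted client (203.0.113.10 is a placeholder address) and drops everything else on that port:
sudo iptables -I INPUT -p tcp -s 203.0.113.10 --dport 5601 -j ACCEPT
sudo iptables -A INPUT -p tcp --dport 5601 -j DROP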
Sorry, forgot to update this question. The answer turned out to be that I simply needed to deploy a new instance. By creating a clone of the instance, I was able to resolve the issue. I've had networking problems at AWS before, with their internal DNS/IP conflicts, so I've had to do this in the past, and it turned out to be the quickest and cleanest solution, albeit one that doesn't provide any definitive insight into the cause.
I have installed a single-node Hadoop cluster using Hortonworks/Ambari on an Amazon EC2 host.
Since I don't want this cluster running 24/7, I stop the instance when done. When I reboot the instance later, I get a new IP address, and then Ambari is no longer able to start the Hadoop-related services.
Is there a way, other than completely redeploying, to reconfigure the cluster so the services will start?
It looks like the IP address lives in various XML files under /etc, in the Postgres ambari database, and possibly other places I haven't found yet.
I tried updating the XML files and the Postgres database with updated versions of the IP address and the internal and external DNS names, as far as I could find them, but to no avail. I have not been able to restart the services.
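As an aside, one quick way to locate lingering references to the old address (the IP shown is just an example) is:
grep -rl "172.31.1.10" /etc 2>/dev/null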
The reason I am doing this is to possibly save the deployment time, the data and configuration on HDFS, and other project-specific setup each time I restart the host.
Any suggestions?
Thanks!
An Elastic IP can be used. Also, since you mentioned it being a single-node cluster, you can use localhost or the private IP.
If you use an Elastic IP, your UIs will always be on the same public IP. However, if you use the private IP or localhost and do not associate your instance with an Elastic IP, you will have to look up the public IP every time you start the instance and then connect to the web UI using that IP.
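For reference, attaching an Elastic IP can also be scripted with the AWS CLI; the instance and allocation IDs below are placeholders:
aws ec2 allocate-address --domain vpc
aws ec2 associate-address --instance-id i-0123456789abcdef0 --allocation-id eipalloc-0123456789abcdef0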
Thanks for the help; both Harman and TJ are correct. I haven't used an Elastic IP because I might have more than one of these running at a time, and for now at least, I don't mind looking up the public IP address.
Harman's suggestion of using "localhost" as the fqdn when setting up ambari in the first place is a really good idea in retrospect. Unless I go through the whole setup again, that's water under the bridge for me, but I recommend this to others who might read this post.
In my case, I figured this out on my own before coming back to the page. The specific step I took was insanely simple after all, thanks to Occam's Razor.
I added the following line in /etc/hosts:
<new internal IP> <old internal dns name>
and then did
ambari-server restart
from the command line. Then I was able to restart all services after logging into Ambari.
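For illustration, assuming the new private IP is 172.31.5.20 and the internal DNS name Ambari was originally installed with is ip-172-31-1-10.ec2.internal (both made-up values), the steps look roughly like this:
# map the old internal hostname to the instance's new private IP
echo "172.31.5.20 ip-172-31-1-10.ec2.internal" | sudo tee -a /etc/hosts
# restart the Ambari server so it picks up the mapping
sudo ambari-server restart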
IN SHORT:
How would one create a local DNS cache on a linux system (ubuntu) so that common queries can run faster, and is it then possible to purge it?
The cache should be populated by the first queries, not created by hand.
BACKGROUND:
There's a web server up in the cloud which makes connections to itself, since the database is currently on the same (virtual) machine. To make it easier for future expansion, where the database will be on another server, I've simply pointed the web server at an address like database.example.com and set the DNS record to 127.0.0.1. The plan is that I can then simply change the DNS record once everything's migrated over. This might seem like overkill with just web and database, but there will be other types of servers too (redis, node.js, etc.)
The problem is that when I use the hostname version, it is very slow (5-10 seconds for session_start). When I use the IP address (i.e. 127.0.0.1), it is very fast (a couple of milliseconds).
It seems clear to me that the problem is in DNS, and I believe local caching is a fine solution since it will allow me to manage it all in one place, rather than having to step through the different parts of the system and change configuration.
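A quick way to confirm that the lookup itself is the slow part (database.example.com is the placeholder name from above):
time getent hosts database.example.com
dig database.example.com | grep "Query time"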
Install dnsmasq
apt-get install dnsmasq
Lock it down to localhost only by adding the following to /etc/dnsmasq.conf:
listen-address=127.0.0.1
Start the service and verify that it is running:
service dnsmasq start
dig www.google.com @127.0.0.1
Edit /etc/resolv.conf and add the following as your first line:
nameserver 127.0.0.1
And remove options rotate if present.
Note: you may have scripts that automatically rewrite /etc/resolv.conf, in which case you'll have to change those as well (e.g. dhclient or /etc/network/interfaces).
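To cover the purge part of the question: dnsmasq drops its cache when it is restarted or when it receives SIGHUP, so either of these should do it:
sudo service dnsmasq restart
# or, without a full restart:
sudo killall -HUP dnsmasq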
I set up a Tomcat 7 cluster by including the <Cluster> part in server.xml.
In the Docs ( http://tomcat.apache.org/tomcat-7.0-doc/cluster-howto.html ) it says:
The IP broadcasted is java.net.InetAddress.getLocalHost().getHostAddress() (make sure you don't broadcast 127.0.0.1, this is a common error)
Unfortunately getLocalHost().getHostAddress() returns 127.0.1.1 for all my virtual machines (Ubuntu running in VirtualBox under Win7) instead of the correct IP that I can reach the VMs with, i.e. 10.42.29.191.
Question:
Is there a way to tell Tomcat what IP to broadcast to the other members of the cluster via multicast? Or can I specify (e.g. in code) a different way to obtain the IP?
Additional info:
My cluster seems to fail at session replication, and the above "error" could be the cause of it. GlassFish doesn't do session replication either; maybe it's the same error. If you could give information on the GlassFish configuration regarding this, I'd be glad too. Multicast between the virtual machines works, according to iperf.
Since the VM is an Ubuntu machine, I had to edit /etc/hosts.
Replace an entry like this:
127.0.1.1 tim-VirtualBox
with the correct ip:
10.42.29.191 tim-VirtualBox
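Ubuntu puts a line like "127.0.1.1 <hostname>" into /etc/hosts by default, and that is exactly what getLocalHost() resolves. After editing the file you can confirm the fix with:
getent hosts tim-VirtualBox   # should now print 10.42.29.191
hostname -i                   # resolves this machine's own hostname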
Over the last few weeks I have worked with WSO2 products, using some of the tutorials posted on the WSO2 site.
Unfortunately, I only found tutorials where all the products run on the same machine.
What do I have to do if I want to run the products on different machines? I want a configuration where:
- ESB runs on machine 1
- AS and GREG run on machine 2
- Proxy services in the ESB or a web service in AS are invoked from machine 3
I run these examples on some Macs. I think the main problem is the ports that are used. Can somebody help me with the configuration?
Can you elaborate on your configuration problem?
With this configuration you have to be sure that from one server you can ping the other servers, and that on each server the ports 9443 and 9763 (by default) are open to the network. This is the only requirement you need.
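A minimal check, assuming machine 2 is reachable as machine2.example.com (placeholder name) and that netcat is available:
ping -c 3 machine2.example.com
nc -zv machine2.example.com 9443
nc -zv machine2.example.com 9763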
What you are trying is nothing new. In a typical production deployment each of the servers runs on its own physical/virtual machine.
When you call a service, you are calling an endpoint uniquely identified by IP address:port/contextPath.
If the setup is on the same local machine, the IP address would be 'localhost'.
First you have to learn TCP/IP basics; the question is not really related to WSO2 servers, IMHO.