The connection credentials of the original cluster queue manager (the box that was cloned) keep existing on the new, cloned box even after the channels have been deleted.
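If this is the usual cluster behaviour, deleting the channels alone does not remove the old queue manager's entry from the cluster repositories; a minimal sketch of the MQSC commands typically used to clear it, assuming a cluster called CLONECLUS and an old queue manager called QM_OLD (both placeholder names):

    * run in runmqsc against a full repository queue manager
    RESET CLUSTER(CLONECLUS) QMNAME(QM_OLD) ACTION(FORCEREMOVE) QUEUES(YES)

    * run in runmqsc on the cloned queue manager to rebuild its view of the cluster
    REFRESH CLUSTER(CLONECLUS) REPOS(YES)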
Related
Two of my drives crashed on the Ambari Server node so I have to re-migrate my Ambari Cluster. No real data was lost (due to a different backup strategy) but the configuration files of the node, including Ambari Server configuration, are gone.
Because two drives crashed, I cannot access any files from that node anymore (RAID 5).
I am now in the process of reinstalling the Ambari Server on the same node and would like to have my agents seamlessly reconnect to the "new" Ambari Server.
Is there a way to migrate the existing cluster settings to the new Ambari Server? I am thinking of cluster settings that were distributed to the agents, or similar.
If there is no such way to migrate the cluster, how should I go about installing the Ambari Server? Do a fresh install and set everything up again? Will the Ambari agents be able to connect to the "new" cluster without problems? Note that the Ambari Server will run on the same hostname/IP.
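If it comes down to a fresh install, a minimal sketch of the server reinstall and the agent-side check, assuming a RHEL/CentOS-style host with the Ambari repo already configured, the default agent config path, and ambari.example.com as a placeholder for the server hostname (the cluster definition itself lives in the server database, so without a database backup it cannot be recovered this way):

    # on the Ambari Server node
    yum install ambari-server
    ambari-server setup      # re-enter the database and JDK settings you used before
    ambari-server start

    # on each agent node: confirm the agent still points at the same server hostname
    grep -A1 '\[server\]' /etc/ambari-agent/conf/ambari-agent.ini
    # hostname=ambari.example.com
    ambari-agent restart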
We have a 5-node Hortonworks cluster with Ambari Metrics Monitors installed on all nodes and the Metrics Collector installed on the master node.
I am getting Connection failed: [Errno 111] Connection refused to 0.0.0.0:6188
Please find the error attached:
https://drive.google.com/file/d/0B85rPUe3-QPXbXJSRzJmdUwwQU0/view?usp=sharing
I followed the document below and tried removing the service and adding it back:
https://cwiki.apache.org/confluence/display/AMBARI/Moving+Metrics+Collector+to+a+new+host
First of all, I am not able to find the origin of the error. Please share your experience if you have ever faced this problem.
It sometimes happens that the port is already in use by another process when you move the collector to a new host with the curl commands specified on the Apache wiki.
Instead of doing that, you can leverage the feature that Ambari provides in its GUI to move components from one host to another: the 'Move Master Wizard'.
Follow the steps described for the Move Master Wizard and Ambari will take care of the rest for you.
I fixed this issue by killing the process running on that port and restarting the service. A manual reboot of the machine also fixes it.
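A minimal sketch of that procedure, assuming a Linux host and that the Metrics Collector is restarted afterwards from the Ambari UI:

    # find out which process is bound to the collector port
    netstat -tnlp | grep 6188      # or: lsof -i :6188
    # stop that process using the PID reported above
    kill <pid>                     # escalate to kill -9 only if it will not exit
    # then restart the Ambari Metrics Collector from the Ambari UI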
I am trying to add a new Spark node to an existing DSE 4.8.4 cluster running on EC2. I use the following settings:
AMI = ami-50520e27
AMI username = ubuntu
Use OpsCenter specific security group
Key file used by the existing cluster
The machine spins up, but in OpsCenter > Activities > Add Nodes there is a message saying "Determining SSH fingerprints" which stays there forever.
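One thing worth verifying by hand (only a guess at the likely cause) is whether the OpsCenter host can actually reach the new node over SSH, since that is the step the wizard is stuck on; for example, with 203.0.113.10 standing in for the new node's address and key.pem for the cluster's key file:

    # from the OpsCenter host: can we log in through the security group?
    ssh -i /path/to/key.pem ubuntu@203.0.113.10 'echo ok'
    # which host-key fingerprints does the node actually present?
    ssh-keyscan 203.0.113.10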
I have the following situation:
A private enterprise network with an Icinga2 master, monitoring the internal servers. The firewall blocks all inbound access; however, all servers have outbound internet access (multiple protocols, such as SSH, HTTP, HTTPS).
We also have an environment in Azure with one publicly accessible VM (nginx) and, behind that, in a private network, the application servers. I'd also like to monitor these servers. I read that I can set up an Icinga2 satellite (in Azure) that monitors the Azure environment and sends the data to the master.
This would be a great solution. However, my master is in our private enterprise network, so the Icinga satellite can't push any data to the master. The only option would be for the master to pull the data periodically from the satellite(s). It is possible for the master to log in via SSH agent forwarding to the servers in Azure. Is this possible, or is there a better solution? I'd rather not create a reverse SSH tunnel.
You might just use the icinga2 client and let the master connect to the client (ensure that the Endpoint object contains host/port attributes). Once the connection is established, the client will send its check results (and historical replay logs, if there are any).
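A minimal sketch of the master-side configuration for this, assuming the satellite is reachable from the enterprise network as azure-sat.example.com on the default Icinga2 port (names and addresses are placeholders):

    // e.g. in zones.conf on the master
    object Endpoint "azure-sat.example.com" {
      host = "203.0.113.10"    // address the master connects out to
      port = 5665              // default Icinga2 cluster/API port
    }

    object Zone "azure" {
      endpoints = [ "azure-sat.example.com" ]
      parent = "master"        // check results are reported up to the master zone
    }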
We have a series of Amazon Web Services servers running Amazon Linux and Oracle XE, which is used by a local app. OracleXE installs and runs just fine, our app can connect to the DB, everything is great.
For one particular server, we needed to shut it down and archive it. Today, I need to bring it back online. This is done by setting up a new AWS instance, creating a new virtual hard drive from the backup snapshot, setting up a new public IP for the server and changing the DNS settings so that the old domain points to the new IP, connecting the restored virtual drive as the main drive, and starting it up.
OracleXE doesn't want to work. Using sqlplus to connect to localhost:1521/XE produces "ORA-12514: TNS:listener does not currently know of service requested in connect descriptor".
This system was working just fine when I snapshotted it and archived it the first time, and I hadn't changed any settings since restoring it. Everything should be exactly the same, so why is OracleXE now not working?
The listener.ora and tnsnames.ora had the host defined using the server's public domain name. I tried changing that to localhost, but it's still not working.
The only things I can think of that will be different are the server's public IP address and the "rsa2 key fingerprint" (which PuTTY complains about because the SSH host key is the same but it is a new AWS instance). All the advice I've seen so far is for fixing config errors for ORA-12514 when setting up a new system or after restarting, but this is a system that was working fine and has been restored from a snapshot.
Most likely, your listener uses dynamic instance registration.
For that to work, your listener must be listening on the default port (1521), or your instance must use the local_listener parameter, which defines the address of the listener where the instance should register its existence.
So, check your local_listener parameter, and check the server's tnsnames.ora and listener.ora. Also check your client's tnsnames.ora; it should point to your new server and listener.
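A minimal sketch of those checks, assuming a default XE setup with the listener on port 1521:

    -- in sqlplus as a privileged user on the server
    show parameter local_listener
    alter system register;     -- ask the instance to re-register its services with the listener

    # from the shell on the server
    lsnrctl status             # the XE service should be listed once registration has happened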