On a Linux machine (Machine_A), we currently have a Deployment Manager (commerce profile, DMGR profile).
On another machine, Machine_B (commerce profile), we are creating a managed WAS node.
We need to add this node to the Deployment Manager's cell on Machine_A. I was able to federate the node for the commerce profile on Machine_A, but when I try to federate the node for the commerce profile on Machine_B, the following error occurs:
Error:
ADMU0010E: Error obtaining repository client com.ibm.websphere.management.exception.AdminException: ADMU0038E: The Deployment Manager's IP address resolves as 127.0.0.1, but the Deployment Manager is not on the local machine. The Deployment Manager's host name configuration or DNS is configured incorrectly.
You have problems in your hostname configuration. Check the /etc/hosts files on both machines and make sure both contain hostname entries (not localhost) mapped to the correct IP addresses. If you created the DMGR profile using localhost, that's incorrect, and I'd suggest recreating the profiles; otherwise you will have to run scripts that change the hostnames in the profiles.
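For example, assuming Machine_A (the Deployment Manager) is dmgr.example.com at 192.168.10.11 and Machine_B is node1.example.com at 192.168.10.12 (placeholder names and addresses), the /etc/hosts on both machines should look roughly like this:
# /etc/hosts on both Machine_A and Machine_B
127.0.0.1       localhost
192.168.10.11   dmgr.example.com    dmgr
192.168.10.12   node1.example.com   node1
The key point is that from Machine_B the Deployment Manager's hostname must resolve to Machine_A's real address, not to 127.0.0.1; verify with ping or nslookup before retrying addNode.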
I have a standalone, secured NiFi 1.12.1 running fine in Docker. I am successfully using Site-to-Site remote processors, Site-to-Site forwarding of NiFi bulletins, calling the NiFi API for self-monitoring, and such things. I log in with a certificate. So far all fine.
Problem crops up when I try to use NiFi Registry. I have access to two instances: secure and insecure.
No matter what exact format I specify (FQDN, just a name, with /nifi-registry or without), when I try to access either NiFi Registry from NiFi (e.g. through importing a process group), it fails with o.a.n.w.a.config.NiFiCoreExceptionMapper org.apache.nifi.web.NiFiCoreException: Unable to obtain listing of buckets: java.net.ConnectException: Connection refused (Connection refused). Returning Conflict response. The logs contain just this message with an enormous stack trace and nothing more.
I checked all certificates and they seem OK (certification path, certificate is for clientAuth as well as for serverAuth). I even use them to log into NiFi myself...
What surprises me most is that it works for things like Site-to-Site, API calls and such, but not for NiFi Registry.
Do you know what the problem might be? Or any ideas what to check?
TL;DR:
Use IP addresses or edit /etc/hosts. The problem is in the translation of the hostname to an IP address.
When I attempted to access the NiFi Registry API directly from NiFi through InvokeHTTP, I noticed an important thing: nothing in a different container responded to me (failed to connect to target):
#Secure NiFi - the one I am troubleshooting
https://<my FQDN>:8443/nifi-api/flow/registries
#Secure NiFi Registry (another container) - the one I am trying to connect to
https://<my FQDN>:18443/nifi-registry-api/buckets
#Insecure NiFi (another container) - just for testing
http://<my FQDN>:28080/nifi-api/flow/registries
#Insecure NiFi Registry (another container) - just for testing
http://<my FQDN>:38080/nifi-registry-api/buckets
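The same check can also be done from a shell inside the secure NiFi container with curl (a quick sketch; -k only skips certificate verification, which is fine for a pure connectivity test):
# run inside the secure NiFi container
curl -vk https://<my FQDN>:18443/nifi-registry-api/buckets
# for comparison, try the Docker host's IP address instead of the FQDN
curl -vk https://<IP of Docker host>:18443/nifi-registry-api/buckets
If the first call gets "Connection refused" while the second connects, the hostname is resolving to the wrong address inside the container.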
Then it dawned on me: to solve a problem with Site-to-Site connections (a discrepancy between the container name and the HTTPS certificate issued for the hosting machine), I had given the container the same name as the hosting Docker machine. To verify, I used IP addresses instead of FQDNs, and it worked. Checking /etc/hosts confirmed it: the FQDN pointed to the IP address of the container instead of the Docker host.
Thus, inside the container the given FQDN resolved to localhost, while everywhere else it resolved to the Docker host. And since nothing was listening on the NiFi Registry port(s) on localhost ...
So as a solution, either edit /etc/hosts to remove the offending line, or use IP addresses to force the traffic to go through the Docker host.
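A minimal sketch of both options, assuming the container was named nifi.example.com to match the Docker host (all names and addresses are placeholders):
# Inside the container, Docker generated an entry like this for the container's own name,
# which made the FQDN resolve to the container itself:
#   172.17.0.2   nifi.example.com
# Option 1: drop that line from the container's /etc/hosts
# (sed -i fails on the bind-mounted file, so rewrite it in place)
grep -v 'nifi.example.com' /etc/hosts > /tmp/hosts && cat /tmp/hosts > /etc/hosts
# Option 2: configure the Registry client in NiFi with the Docker host's IP instead of the FQDN
#   https://192.168.10.11:18443
Note that a hand-edited /etc/hosts does not survive recreating the container, so the edit has to be repeated (or scripted) whenever the container is rebuilt.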
I have a Kubernetes cluster running on GKE and a Jenkins server running on a GCP instance.
I am using the Kubernetes plugin to dynamically create pods on the Kubernetes cluster, and I created a pipeline (declarative syntax) for this.
I am aware that the Jenkins slave agents communicate with the Jenkins master on port 50000.
A snip of the configuration
But for some reason, when I viewed the logs of the JNLP container created by Jenkins, I saw an exception: tcpSlaveAgentListener not found.
A snip of the container log
According to the above image, I assume the tunneling is unsuccessful as it is trying to connect to http://34.90.46.204:8080/tcpSlaveAgentListener/ whereas it should connect to http://34.90.46.204:50000/tcpSlaveAgentListener/.
It was a lazy question for me to ask, but I solved the issue.
In the Manage Jenkins -> Configure Global Security settings:
For the option that sets the TCP port for inbound agents, unselect the Disable option (which is selected by default) and then provide a fixed port for the inbound agents to communicate on (50000).
A snip of the configuration
Jenkins uses a TCP port to communicate with agents connected inbound. If you're going to use inbound agents, you can allow the system to randomly select a port at launch (this avoids interfering with other programs, including other Jenkins instances). As it's hard for firewalls to secure a random port, you can instead specify a fixed port number and configure your firewall accordingly.
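If you manage the Jenkins configuration as code, the same fixed port can be pinned so it survives restarts; a sketch using the Configuration as Code (JCasC) plugin, assuming the conventional port 50000:
# jenkins.yaml (Configuration as Code plugin)
jenkins:
  slaveAgentPort: 50000
The inbound agent containers first probe the tcpSlaveAgentListener endpoint over the normal web port (8080 here) and are then told which TCP port to connect back on, so both 8080 and 50000 need to be reachable from the cluster.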
Hope this helps someone.
I have installed WebSphere Application Server v7 Stand-alone Edition. I have also created three application server profiles and one administrative agent profile. The hostname parameter was set to 'hostname' when creating these profiles. How can I update this parameter to the actual hostname of the machine?
You can use the wsadmin tool to change the hostname; see the IBM documentation for WAS 7 for instructions.
You will need to repeat the process for each of the created profiles. Once the change is done, restart the servers for it to take effect.
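The command the documentation describes is, if I recall correctly, AdminTask.changeHostName, run from wsadmin in each profile; a rough sketch with placeholder names (server stopped, run from the profile's bin directory):
# ./wsadmin.sh -lang jython -conntype NONE
AdminTask.changeHostName('[-nodeName myNode01 -hostName newhost.example.com]')
AdminConfig.save()
Repeat with the node name of each application server profile and of the administrative agent profile.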
Stop WAS
Edit the file serverindex.xml located under:
$IBM_home$/profiles/$Server_name$/config/cells/$cell_name$/nodes/$node_name$
Replace the old hostname with the new hostname in all hostName and host entries (a scripted sketch of this step follows below)
Start WAS
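A rough scripted version of steps 2-3 (the paths, cell, node, and hostnames are placeholders; back up the file first):
cd /opt/IBM/WebSphere/AppServer/profiles/AppSrv01/config/cells/myCell01/nodes/myNode01
cp serverindex.xml serverindex.xml.bak
sed -i 's/oldhost.example.com/newhost.example.com/g' serverindex.xml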
I am very new to WebSphere Application Server. I need to know the difference between a virtual host and a context root, and the benefits of each. I have tried to Google it, but I couldn't find proper information. I hope Stack Overflow will help me learn.
virtual host - A configuration that enables one host to resemble multiple logical hosts. Each virtual host has a logical name and a list of one or more DNS aliases by which it is known.
context root - The web application root, which is the top-level directory of an application when it is deployed to a web server.
So a virtual host allows you to have different domain names and even certificates configured for the same app server: www.mydomain.com/AppName and www.myNewDomain.com/AppName
The context root is just the top-level directory of an application: www.mydomain.com/App1 and www.mydomain.com/App2
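A small illustration with made-up names: the context root belongs to the application itself (for an EAR it sits in application.xml, or is set at deployment time), while a virtual host such as default_host is a named list of host:port aliases in the WebSphere configuration that you map the web module to when installing the application.
<!-- application.xml (sketch): the context root is part of the application -->
<module>
  <web>
    <web-uri>App1.war</web-uri>
    <context-root>/App1</context-root>
  </web>
</module>
If that module is mapped to a virtual host whose aliases include www.mydomain.com:443 and www.myNewDomain.com:443, the same application answers on both names under /App1.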
I have set up a server and a build agent on Amazon EC2. I have configured the build agent to sit in the /home/ec2-user/build-agent directory (will this cause problems due to permissions?).
I then configured the build agent to point to the server through the public DNS name and port 8111, and then started the build agent.
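For reference, that pointing is done in conf/buildAgent.properties under the agent directory; mine looks roughly like this (the hostname is a placeholder):
# /home/ec2-user/build-agent/conf/buildAgent.properties
serverUrl=http://ec2-xx-xxx-xx-xx.eu-west-1.compute.amazonaws.com:8111
name=ec2-build-agent-1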
I used the Cloud tab in the TeamCity web interface to set up the build agent on the server. I can see on the build agent host machine that a connection with the server has been established, yet in the web interface I can see no build agents in either the Unauthorized tab or the Compatible tab.
In short, I cannot connect a build agent to the server.
Any help or ideas on this one would be greatly appreciated.
many thanks
Dermot