We have a new multi-instance IBM MQ (version 9) queue manager setup running on Windows Server 2016.
When we start the queue managers using strmqm -x, we can see that the instances on one side are active and the instances on the other side are on standby, as expected, which is great.
We then reboot the server on one side and can see that the standby instances on the other side become active, again as expected, which is also great.
After the reboot, however, the queue managers on the rebooted server do not start up in standby mode (I am guessing this is because they are not being started with the -x option?). Is it possible to have the queue managers start automatically with the -x option when the server boots?
strmqm -x should start the queue manager as a standby instance, assuming there is already an active instance elsewhere. So the answer here is to make sure that whatever starts your queue managers at boot time also passes the -x switch.
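On Windows there are a couple of ways to arrange that. MQ Explorer lets you set a queue manager's startup type to the automatic option that permits standby instances, which is the usual approach. Alternatively, here is a minimal sketch using a boot-time scheduled task (QM1 and QM2 are placeholder queue manager names, the script path is hypothetical, and the task must run under an account with IBM MQ administrator rights):

rem C:\mqm-scripts\start-standby-qmgrs.cmd - run at boot, starts each queue manager as a standby instance
strmqm -x QM1
strmqm -x QM2

Register it to run at system startup:

schtasks /create /tn "Start MQ standby instances" /tr "C:\mqm-scripts\start-standby-qmgrs.cmd" /sc onstart /ru SYSTEM

If the other server happens to be down at the time, strmqm -x simply starts that instance as the active one instead, which is normally what you want anyway.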
I want to access a queue manager via MQ Explorer but I am getting an error:
Could not establish a connection to the queue manager - reason 2538. (AMQ4059)
Severity: 10 (Warning)
Explanation: The attempt to connect to the queue manager failed. This could be because the queue manager is incorrectly configured to allow a connection from this system, or the connection has been broken.
Response: Try the operation again. If the error persists, examine the problem determination information to see if any information has been recorded.
I followed all the instructions in https://www-01.ibm.com/support/docview.wss?uid=swg21623113 to allow MQ Explorer to access the MQ server, but still no luck.
IBM MQ Server details:
Version: 8
OS: CentOS
Running in a Docker container
Using port 1417, since port 1414 is already in use by another MQ server
The listener is up and running and listening on port 1417
The channel is defined as described in the link I shared (I disabled all security features as described there)
I have a sample Java app that can put/get messages, and it is working fine
MQ Explorer details:
Also running in another docker container thanks to
https://github.com/ibm-messaging/mq-container/tree/master/incubating/mq-explorer
I can telnet to the MQ server from xterm, so there is no connectivity issue.
Although I disabled all security features, I also tried creating the same username on the server as on my xterm, but that did not work either.
I was expecting an error message on the MQ server that would help me understand the issue, but surprisingly there is no error message at all ...
Screenshot
You've stated that your queue manager(s) are running in one container and your MQ Explorer is running in another container. I've noticed you've supplied 0.0.0.0 as your hostname, but from the MQ Explorer container that address points at the Explorer container itself, which has no queue managers running on it!
If you run the following command (replacing <QM container> with the ID of the container running your queue managers) you should get the IP address of the container on the docker subnet. Try using that IP address in MQ Explorer instead of 0.0.0.0:
docker inspect --format "{{ .NetworkSettings.IPAddress }}" <QM container>
If your container is on a different docker network then you will need to run the following instead, replacing <Network Name> with the name you gave the docker network:
docker inspect --format "{{ .NetworkSettings.Networks.<Network Name>.IPAddress }}" <QM container>
Additionally, when you created your queue manager container, did you remember to publish the 1417 port you are trying to use? By default the mq-container sample only exposes the following ports: 1414, 9157 and 9443. When you ran the container you would have needed to publish the extra port by supplying --publish 1417 (or --publish-all) on the docker run command. For example:
docker run -d -e LICENSE=accept --publish-all --publish 1417 ibmcom/mq
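To double-check the mapping afterwards (a quick verification, using the same <QM container> placeholder as above), you can list every published port and the host port it maps to:

docker port <QM container>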
You have attempted to connect your MQ Explorer to your queue manager using the following connection details:-
Host name or IP address: 0.0.0.0
Port number: 1417
Server-connection channel: SYSTEM.ADMIN.SVRCONN
and you have received return code MQRC_HOST_NOT_AVAILABLE (2538), which says that the network address is not reachable.
Common reasons for this error include not having a TCP/IP listener running on that port, but you have told us you have a listener running.
The IP address you have used is the problem. Change the IP address in your MQ Explorer configuration to the actual IP address where the queue manager is running. If the MQ Explorer and Queue Manager are on the same machine (in the same container), you can use the localhost hostname or the IP address 127.0.0.1, otherwise, please use the assigned IP address for the machine. From your screenshot it appears that this might be a 192.168.* address.
You don't say which version of IBM MQ your queue manager is running, e.g. v7.5, v8.0, v9.0 or v9.1.
Did you give yourself CHLAUTH permission to use the SYSTEM.ADMIN.SVRCONN channel? Most likely you are being blocked by the backstop rule.
Also, if you are on IBM MQ v8.0 or higher, CONNAUTH could be blocking you.
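To make those two checks concrete, here is a rough MQSC sketch (run via runmqsc against your queue manager inside its container; the user name 'myuser' and the client IP address are hypothetical placeholders, and relaxing CONNAUTH like this is only sensible while you get a test connection working):

* see which CHLAUTH rule would apply to your client connection
DISPLAY CHLAUTH('SYSTEM.ADMIN.SVRCONN') MATCH(RUNCHECK) CLNTUSER('myuser') ADDRESS('172.17.0.3')
* allow connections on this channel and override the backstop rule that blocks MQ administrators
SET CHLAUTH('SYSTEM.ADMIN.SVRCONN') TYPE(ADDRESSMAP) ADDRESS('*') USERSRC(CHANNEL)
SET CHLAUTH('SYSTEM.ADMIN.SVRCONN') TYPE(BLOCKUSER) USERLIST('nobody')
* on MQ v8.0 or higher, make connection authentication optional and refresh security so it takes effect
ALTER AUTHINFO(SYSTEM.DEFAULT.AUTHINFO.IDPWOS) AUTHTYPE(IDPWOS) CHCKCLNT(OPTIONAL)
REFRESH SECURITY TYPE(CONNAUTH)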
Here are 2 good links to walk you through your issue.
https://www.ibm.com/developerworks/community/blogs/aimsupport/entry/blocked_by_chlauth_why?lang=en
https://www.ibm.com/support/knowledgecenter/en/SSFKSJ_8.0.0/com.ibm.mq.mig.doc/q001110_.htm
I am new to Spark and I am trying to run it on EC2. I followed the tutorial on the Spark webpage, using spark-ec2 to launch a Spark cluster. Then I tried to use spark-submit to submit an application to the cluster. The command looks like this:
./bin/spark-submit --class org.apache.spark.examples.SparkPi --master spark://ec2-54-88-9-74.compute-1.amazonaws.com:7077 --executor-memory 2G --total-executor-cores 1 ./examples/target/scala-2.10/spark-examples_2.10-1.0.0.jar 100
However, I got the following error:
ERROR SparkDeploySchedulerBackend: Application has been killed. Reason: All masters are unresponsive! Giving up.
Please let me know how to fix it. Thanks.
You're seeing this issue because the master node of your spark-standalone cluster can't open a TCP connection back to the driver (on your machine). The default deploy mode of spark-submit is client, which runs the driver on the machine that submitted the job.
A new cluster deploy mode was added to spark-deploy that submits the job to the master, where the driver is then launched inside the cluster, removing the need for a direct connection back to your machine. Unfortunately this mode is not supported in standalone mode.
You can vote for the JIRA issue here: https://issues.apache.org/jira/browse/SPARK-2260
Tunneling your connection via SSH is possible but latency would be a big issue since the driver would be running locally on your machine.
I'm curious whether you are still having this issue ... but in case anyone is asking, here is a brief answer. As clarified by jhappoldt, the master node of your spark-standalone cluster can't open a TCP connection back to the driver (on your local machine). Two workarounds are possible; both have been tested and succeeded.
(1) From the EC2 Management Console, create a new security group and add rules to enable TCP back and forth from your PC (public IP). (What I did was add TCP rules inbound and outbound.) Then add this security group to your master instance (right click --> Networking --> Change security groups). Note: add it and don't remove the already established security groups.
This solution works well, but in your specific scenario (deploying your application from a local machine to an EC2 cluster) you will face further, resource-related problems, so the next option is the better one.
(2) Copy your .jar file (or .egg) to the master node using scp. You can check this link http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AccessingInstancesLinux.html for information about how to do that, and deploy your application from the master node. Note: Spark is already pre-installed, so you do nothing but run the exact same command you would run on your local machine from ~/spark/bin (see the sketch below). This should work perfectly.
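As a rough sketch of that second option (reusing the master hostname and example jar from the question; the key file name is a placeholder, and the root login is what spark-ec2 sets up by default, so adjust to your own values):

scp -i mykey.pem ./examples/target/scala-2.10/spark-examples_2.10-1.0.0.jar root@ec2-54-88-9-74.compute-1.amazonaws.com:~/
ssh -i mykey.pem root@ec2-54-88-9-74.compute-1.amazonaws.com
~/spark/bin/spark-submit --class org.apache.spark.examples.SparkPi --master spark://ec2-54-88-9-74.compute-1.amazonaws.com:7077 --executor-memory 2G --total-executor-cores 1 ~/spark-examples_2.10-1.0.0.jar 100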
Are you executing the command on your local machine, or on the created EC2 node? If you're doing it locally, make sure port 7077 is open in the security settings, as it's closed to the outside by default.
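If you prefer the AWS CLI over the console, opening that port to just your own public IP would look something like this (the security group ID and the IP address are placeholders):

aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 7077 --cidr 203.0.113.10/32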
I have a WebSphere Application Server environment running on a remote Linux machine, with multiple appserver instances running. I want to use a tool like jconsole or VisualVM on my local desktop to monitor the heap size of the individual appservers, but have no idea how to do it. The solutions I found by googling do not explain how to enable connections to the individual appserver instances. Any help please?
You need to enable a JMX connection for every app server instance. Once you have done that, add a remote JMX connection to each of those servers in VisualVM.
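As an illustration only (a sketch, not WebSphere-specific documentation: the port number is arbitrary, and in production you would normally leave authentication and SSL enabled), the standard JMX system properties can be added to each server's generic JVM arguments (Servers > Server Types > WebSphere application servers > [server] > Java and Process Management > Process definition > Java Virtual Machine), giving every instance its own port:

-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=9010 -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false

After restarting the servers, choose File > Add JMX Connection in VisualVM and enter host:9010, repeating with a different port for each appserver instance.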
I'm a WebSphere newbie and would like to change the port number. I read the manual about ports but it didn't say anything about port 8080, and it says that GlassFish is running on 8080. Am I running WebSphere or GlassFish? When I installed WebSphere, a browser window opened saying I'm running GlassFish, which I thought was another app server. Did I install both?
If, on the other hand, you want to change the GlassFish port, go to http://localhost:4848 (the default port for the admin console) and in the menu on the left navigate to:
Configurations > server-config > Network Config > Network Listeners > http-listener-[1|2]
and there you can change the port.
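If you prefer the command line, asadmin can do the same thing; a sketch along these lines should work, though the exact dotted attribute name can vary between GlassFish versions, so it is worth listing it first:

asadmin get "server-config.network-config.network-listeners.network-listener.http-listener-1.*"
asadmin set server-config.network-config.network-listeners.network-listener.http-listener-1.port=8081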
WebSphere Application Server doesn't use Port 8080 by default. If port 8080 indicates that Glassfish is running, then you must have installed and started Glassfish. These are two completely different application servers / products.
The WAS console runs on port 9060 by default, so you may be able to see if you have also started WAS by going to http://localhost:9060/ibm/console. Also, you can check your running processes (via Windows Task Manager, for example) and installed programs (via Windows Programs and Features, or a simple Explorer search) to determine what might be running/installed.
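For example, to see which process is actually listening on port 8080 on Windows (the PID in the last column can then be looked up in Task Manager):

netstat -ano | findstr :8080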
If, after installing/starting WAS, you still want to change your port numbers, you can do this by going to "Servers > Server Types > WebSphere application servers > [appserver] > Ports" (for individual application servers; also these instructions are for WAS 7.0, but should be similar if you're using a different version). If you are using WebSphere Application Server ND, there are actually a number of different processes/application servers running at any given time (deployment manager, nodeagents, individual app servers) that open up ports as well. I wouldn't recommend changing any of the default port values unless you are sure that you have a conflict though.
I have created a cluster in WebSphere and deployed the DefaultApplication that comes with the IBM package to the cluster.
Now I want to test this application to check whether it was installed correctly on each member of the cluster.
How can I check which port it is running on? I tried the default port but could not reach it.
I have installed WebSphere on a machine that has a Windows OS.
Thanks in advance.
Edited:
I want to open the application that is deployed in the cluster. How can I open it?
Any ideas about this?
Check the WC_defaulthost port for each of the cluster members. You can find these ports in the WebSphere Integrated Solutions Console in WAS 6.1 at Servers > Application Servers > [serverName] > Communications > Ports.
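Assuming the DefaultApplication was deployed with its default context roots (an assumption; check the application's web modules if they were changed), you should then be able to reach it on each member with a URL along the lines of:

http://<cluster member host>:<WC_defaulthost port>/snoop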