WebSphere FTE agent going to Unreachable state - ibm-mq

Facing issues with an IBM WebSphere FTE agent. The agent is deployed on a UNIX system. The usual load on this agent used to be around 300 files per day, but it has now increased significantly from 300 to 2,500 per day. Because of this, the agent keeps going down.
I tried fixing the issue by creating multiple monitors polling the same source folder, but the problem still persists, since the monitors poll for the same files and throw "file does not exist" exceptions.
Please help: what are the ways I can fix this issue?

I believe your agent is running out of memory; consider increasing its memory or controlling the number of simultaneous transfers.
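A minimal sketch of the throttling side, assuming a standard MFT installation: the property names below are the agent's documented concurrency limits, but the values shown are placeholders for illustration, not recommendations.

    # Add to the agent's agent.properties, typically found under
    # MQ_DATA_PATH/mqft/config/<coordination_qmgr>/agents/<agent_name>/
    # (the shipped defaults are 25, 25 and 1000 respectively)
    maxSourceTransfers=10
    maxDestinationTransfers=10
    maxQueuedTransfers=2000

Restart the agent (fteStopAgent, then fteStartAgent) for the changes to take effect. Increasing the agent's JVM heap is covered in the MQMFT question further down.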

Related

GC Overhead limit exceeded in WSO2 EI 6.6

In WSO2 EI 6.6, a proxy stopped working abruptly. Upon analysis, I observed an error in the WSO2 Carbon log, "GC Overhead limit exceeded"; after this error, nothing happens in the EI.
The proxy's logic is to get data from a SQL Server table, build an XML message, and send it to an external API. The proxy runs at a 5-minute interval, and in each interval a maximum of 5 records is pushed to the API.
After restarting the WSO2 Carbon services, the proxy starts working again. Currently we restart the services every 3 days to avoid this issue.
I need to know how to identify the underlying issue and resolve it.
This means the JVM has run out of allocated memory. There can be many reasons for this; for example, if you haven't allocated enough memory to the JVM, you can easily run out. If that's not the case, you need to analyze a memory dump and see what's occupying the memory and causing it to fill up.
Generally, when you see the mentioned error, the JVM automatically creates a heap dump (heap-dump.hprof) in the <EI_HOME>/repository/logs directory. You can try analyzing that dump to find the root cause. If the server doesn't generate a memory dump, take one manually when memory usage is higher than expected and analyze it.
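If you need to take the dump by hand, a minimal sketch (assuming a JDK is installed on the server; the PID, file path, and startup-script name are placeholders to adjust for your installation):

    # Find the Carbon JVM's process id
    jps -lv | grep -i carbon

    # Take a heap dump of live objects for that PID
    jmap -dump:live,format=b,file=/tmp/ei-heap.hprof <PID>

    # Optionally make the JVM dump automatically on OutOfMemoryError by adding
    # these options to the JVM arguments in the EI startup script
    # (e.g. <EI_HOME>/bin/integrator.sh on EI 6.x -- verify the script name):
    #   -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp

The resulting .hprof file can then be opened in a heap analyzer such as Eclipse Memory Analyzer (MAT) to see which objects are retaining most of the heap.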

IBM MQMFT recommended max monitor per Agent

Is there a recommended maximum number of monitors per agent with IBM MQ MFT? We are on version 9.2.0.2 and are experiencing slowdowns when connecting and creating/editing monitors on a specific agent, which has about 100 monitors on it. This agent is the "heavy hitter" of our 11 agents: most feed into it, and it feeds out to all of them. I'm just looking for any recommendation, or whether we should configure an additional agent on the same server. All agents and monitors (150 or so in total) are on the same queue manager on an MQ Appliance.
Every monitor basically runs on its own thread and hence consumes resources. Also remember that the transfers initiated by each monitor will run a few threads and consume resources as well, so you may be running a lot of concurrent transfers and the agent is possibly getting overloaded.
What JVM heap size has been allocated to the agent? Have you tried increasing the JVM heap size? If you see slowdowns, it's worth spreading the monitors among multiple agents and, as a next step, adding an extra queue manager as well.
Here is a report that describes the results of tests done on the number of agents connecting to a queue manager.
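A rough sketch of both suggestions, assuming a UNIX shell; the heap size, agent name, and queue manager name are placeholders, not tuned values.

    # Raise the agent JVM heap (picked up by fteStartAgent)
    export BFG_JVM_PROPERTIES="-Xmx1024M"
    fteStopAgent HUB_AGENT
    fteStartAgent HUB_AGENT

    # Spread the load: define a second agent on the same server and move
    # some of the monitors to it
    fteCreateAgent -agentName HUB_AGENT2 -agentQMgr APPLIANCE_QMGR
    fteStartAgent HUB_AGENT2
    fteListMonitors -ma HUB_AGENT    # review what is currently defined

Note that fteCreateAgent also generates an MQSC script of queue definitions that has to be run against the agent queue manager before the new agent will start.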

Marklogic latency : Document not found

I am working on a clustered MarkLogic environment where we have 10 nodes. All nodes are shared E&D (evaluator and data) nodes.
Problem that we are facing:
When a page is written to MarkLogic, it takes some time (up to 3 seconds) for all the nodes in the cluster to get updated, and if I do a read operation during this window to fetch the previously written page, it is not found.
Has anyone experienced this latency issue and looked at eliminating it? If so, please let me know.
Thanks
It's normal for a new document to only appear after the database transaction commits. But it is not normal for a commit to take 3 seconds.
Which version of MarkLogic Server?
Which OS and version?
Can you describe the hardware configuration?
How large are these documents? All other things equal, update time should be proportional to document size.
Can you reproduce this with a standalone host? That should eliminate cluster-related network latency from the transaction, which might tell you something. Possibly your cluster network has problems, or possibly one or more of the hosts has problems.
If you can reproduce the problem with a standalone host, use system monitoring to see what that host is doing at the time. On Linux I favor something like iostat -Mxz 5 and top, but other tools can also help. The problem could be disk I/O, though it would have to be really slow to result in 3-second commits. Or it might be that your servers are low on RAM, so they are paging during the commit phase.
If you can't reproduce it with a standalone host, then I think you'll have to run similar system monitoring on all the hosts in the cluster. That's harder, but for 10 hosts it is just barely manageable.
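For the cluster case, a minimal shell sketch of that kind of monitoring (the host names are placeholders; run it while reproducing the slow commit):

    # Sample disk I/O, CPU, and memory on every host in the cluster.
    # Intervals and counts are arbitrary; adjust as needed.
    HOSTS="ml-node-01 ml-node-02 ml-node-03"   # ...list all 10 hosts

    for h in $HOSTS; do
      echo "=== $h ==="
      ssh "$h" 'iostat -Mxz 5 3; vmstat 5 3; free -m'
    done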

How to prevent WebSphere from starting before files from an application update have been unpacked

Using the WebSphere Integrated Solutions Console, a large (18,400-file) web application is updated by specifying a WAR file name, going through the update screens, and finally saving the configuration. The Solutions Console web UI spins for a while and then returns, at which point the user is able to start the web application.
If the application is started after this "successful update", it fails, because the files that make up the web application have not yet been exploded out to the deployment directory.
Experimentation indicates that it takes on the order of 12 minutes for the files to appear!
One more bit of background that may be significant: there are 19 application servers on this one WebSphere instance. WebSphere insists that there be a lot of chatter between them, even though they don't need anything from each other. I wondered whether this might be slowing things down when it comes to deployment, or whether there's some timer in the bowels of WebSphere that is just set wrong (usual disclaimers apply: I'm just showing up and finding this situation; I didn't configure this installation).
Additional Information:
This is a Network Deployment configuration, and it's all on one physical host.
* ND 6.1.0.23
Is this a standalone or an ND setup? I am guessing it is an ND setup, considering you have stated that there are 19 app servers. The nodes should be synchronized with the deployment manager so that the updated files are available to the respective nodes.
After you update and save the changes, try synchronizing the nodes with the dmgr (or, alternatively, as part of the update process, click Review and check the box that says synchronize nodes); this will distribute the changes to the various nodes.
The default synchronization interval, I believe, is 1 minute.
12 minutes certainly sounds like a lot. Is there any possibility that the network is an issue here?
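In case the automatic sync is the bottleneck, here is a sketch of a manual full resynchronization run from one node's profile (the profile path, dmgr host name, and SOAP port are typical ND defaults, not values taken from this installation):

    PROFILE_BIN=/opt/IBM/WebSphere/AppServer/profiles/AppSrv01/bin

    $PROFILE_BIN/stopNode.sh                  # the node agent must be stopped first
    $PROFILE_BIN/syncNode.sh dmgrhost 8879    # pull the configuration from the dmgr
    $PROFILE_BIN/startNode.sh                 # restart the node agent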
HTH
Manglu

Log file writing extremely delayed in WebSphere App Server

I am experiencing an issue with delayed writes to the application logs for a Java EE web application running in IBM WebSphere v7.x. Logging statements take up to an hour to appear in the application logs.
The problem doesn't appear to be related to heavy load: WAS is responding to page requests almost instantly, I am testing against a box that isn't used for performance testing, and it's a holiday no less, so there is very little activity on the server.
My guess would be that the thread associated with logging has been configured with very low priority, but I can't figure out where that would be configured via the admin console or the configuration files.
Has anyone else experienced this sort of issue with WebSphere?
It's possible you don't even have enough available threads in the thread pool. That is consistent with the page requests being fast, as those are handled by the WebContainer threads.
Try increasing it:
Servers > Application Servers > Thread pools > ...
I'm not sure exactly which pool's maximum value to increase. In the worst case, increase them all, and increase them generously, just to be sure.
Other options:
make sure you have enough disk space / try connecting with JConsole to investigate.
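Two quick checks along those lines, as a sketch (the PID and log path are placeholders; kill -3 assumes the IBM JDK shipped with WAS, where it writes a javacore instead of terminating the process):

    # Is the filesystem holding the server logs healthy and not full?
    df -h /opt/IBM/WebSphere/AppServer/profiles/AppSrv01/logs

    # Trigger a javacore (thread dump) for the app server JVM to see what
    # the logging-related threads are doing; <PID> is the server's process id
    kill -3 <PID>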
