I administer several WebSphere 6.1 servers running the same application in a load balancing configuration. For one of the servers, the WebSphere System.out file is getting filled with these sorts of messages:
[6/5/14 20:20:35:602 EDT] 0000000f SessionContex E Miscellaneous
data: Attribute "rotatorFiles" is declared to be serializable but is
found to generate exception "java.io.NotSerializableException" with
message "com.company.storefront.vo.ImageRotatorItemVO". Fix the
application so that the attribute "rotatorFiles" is correctly
serializable at runtime.
The same code is not generating these messages in the other WebSphere servers' log files. I suspect there is some configuration setting that is causing these messages to be logged on one server but not the others. Does anyone out there know what setting that may be?
At least two come to my mind:
you may have session replication enabled on that server; check in Application servers > server1 > Session management > Distributed environment settings
you may have the PMI counter that monitors session size (Servlet Session Manager.SessionObjectSize) enabled; check in Application servers > server1 > Performance Monitoring Infrastructure (PMI)
The paths in the console are from v8, so they might be a bit different in v6.1, but you should get the idea.
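Whichever of those is enabled, the message itself points at the real fix: every class stored in the HTTP session must be serializable. A minimal sketch, assuming com.company.storefront.vo.ImageRotatorItemVO holds only simple fields (the field names are invented for illustration):

package com.company.storefront.vo;

import java.io.Serializable;

// When session replication or PMI session-size monitoring is active,
// WebSphere serializes session attributes, so each class stored in the
// session (and the types of all its non-transient fields) must implement
// Serializable.
public class ImageRotatorItemVO implements Serializable {

    private static final long serialVersionUID = 1L;

    // Hypothetical fields for illustration
    private String imagePath;
    private String linkUrl;

    public String getImagePath() { return imagePath; }
    public void setImagePath(String imagePath) { this.imagePath = imagePath; }

    public String getLinkUrl() { return linkUrl; }
    public void setLinkUrl(String linkUrl) { this.linkUrl = linkUrl; }
}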
I'm connecting a Web server to a backend using gRPC services.
In the case of backend being set up with -Dspring.profiles.active=default, the gRPC api connects but using -Dspring.profiles.active=prod the connection times out.
In the code, there is no setup for either value, so I'm left to presume they are profiles that come "out of the box" with Spring!?
That's the hypothesis at least, because there don't seem to be any other setup or deployment differences that might be causing these connection errors.
Thanks for any pointers!
The Spring profile determines which properties file is picked up when running the application.
-Dspring.profiles.active=default // picks up the application-default.properties file
-Dspring.profiles.active=prod // picks up the application-prod.properties file
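If the prod properties point the gRPC client at a different host or port (or leave the property unset), that alone would explain a connection timeout under one profile and not the other. A minimal sketch of how such a property might be wired up; the property names, values, and class below are assumptions for illustration, not taken from the original post:

// application-default.properties (assumed): grpc.backend.host=localhost
// application-prod.properties (assumed): grpc.backend.host=backend.internal.example.com

import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Component;

@Component
public class GrpcTargetConfig {

    // Resolved from whichever application-<profile>.properties is active
    @Value("${grpc.backend.host}")
    private String host;

    @Value("${grpc.backend.port:9090}")
    private int port;

    // Target string for the gRPC channel, e.g. "localhost:9090"
    public String target() {
        return host + ":" + port;
    }
}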
Whenever I start WebSphere from my IBM Rational Application Developer, it creates dumps inside
profiles\AppSrv1
and
C:\Server\profiles\AppSrv1\bin\
and then I have to stop the process in Task Manager and delete the dump files.
Please note that I have not logged into the application; the server startup alone is creating the heap dump.
Why is it happening? Any ideas will be helpful.
It seems that automatic heap dump collection is enabled. Check the Enable automatic heap dump collection checkbox under Servers -> Performance -> Runtime in the console.
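If that checkbox is not the culprit, the IBM J9 JVM that WebSphere runs on also produces heap dumps through its own dump agents, which you can inspect or disable via the generic JVM arguments; the options below are standard -Xdump flags, but verify them against your JVM level:

-Xdump:what // prints the active dump agent configuration at startup
-Xdump:heap:none // disables all heap dump agents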
I am looking for a command to change the number of message flow instances in Message Broker at runtime. I know it is quite easy with MB Explorer, but I am more interested in the server-side mqsi commands. Ours is an AIX environment with Message Broker 8 installed.
The number of instances a message flow has on the execution group is configured in the BAR file, before deployment.
If you want to change the number of additional instances you will need to redeploy your flow.
You can use the mqsiapplybaroverride command to change the configuration of the flow in the BAR file, and the mqsideploy command to redeploy the BAR.
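For example, a sketch of the two steps; the broker, execution group, flow, and BAR file names below are placeholders:

mqsiapplybaroverride -b MyFlow.bar -o MyFlow.override.bar -m MyMessageFlow#additionalInstances=4
mqsideploy MYBROKER -e MyExecutionGroup -a MyFlow.override.bar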
As of IIB v9 you can control the number of instances dynamically at runtime by assigning a workload management policy.
See the description here:
http://www-01.ibm.com/support/knowledgecenter/SSMKHH_9.0.0/com.ibm.etools.mft.doc/bn34262_.htm
Once you have assigned a policy you can change it using the mqsichangepolicy command, specifying an XML policy document that has a different number of instances.
Alternatively, you can use the web UI to change it directly on the running broker.
By default, log, error, and trace information for all processes and applications on a process server is written to SystemOut.log, but our requirement is to log only requests and responses. Is there any setting in the admin console to do this?
Thanks in advance.
When WebSphere starts, it designates the SystemOut.log file as the destination for all System.out output. Therefore, whenever any code calls, for example, System.out.println, the output ends up in SystemOut.log - and that is true for both your application code and WebSphere's internal code.
To achieve the effect you're looking for, consider using a logging framework such as Log4J, SLF4J, or Java's standard logging API.
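For example, a minimal sketch using Java's standard logging API (the logger name, file name, and messages are assumptions): giving request/response logging its own named logger and handler keeps that output separate from everything else that lands in SystemOut.log.

import java.io.IOException;
import java.util.logging.FileHandler;
import java.util.logging.Logger;
import java.util.logging.SimpleFormatter;

public class RequestResponseLog {

    // A dedicated logger name makes this output filterable on its own
    private static final Logger LOG = Logger.getLogger("com.example.reqresp");

    static {
        try {
            // Send this logger's records to a separate file instead of SystemOut.log
            FileHandler handler = new FileHandler("request-response.log", true);
            handler.setFormatter(new SimpleFormatter());
            LOG.addHandler(handler);
            LOG.setUseParentHandlers(false); // do not also propagate to the server log
        } catch (IOException e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    public static void logRequest(String uri) {
        LOG.info("Request: " + uri);
    }

    public static void logResponse(int status) {
        LOG.info("Response status: " + status);
    }
}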
Here is the situation:
I'm on WebSphere Network Deployment v8.0.0.3.
I have an application which uses 2 queues.
One queue is for internal application usage (publisher and consumer are inside the same application); the other queue is used by other modules deployed on other application servers inside the cell.
So I have configured the first queue at cell scope and the second queue at cluster scope.
Everything was working until I added a Name Space Binding.
After that, every cluster-scoped JMS JNDI object is no longer present in the dumpNameSpace.sh output.
It seems like the resolution of the scopes is modified by the presence of a Name Space Binding.
That is really odd, but I got the same behaviour on 2 different installations of WAS.
Thanks to anyone who knows about this.
Update
This is the diff between the JNDI dump that works and the one that does not.
--- clsdumpOk 2012-08-07 11:49:43.000000000 +0200
+++ clsdumpKo2 2012-08-07 11:49:59.000000000 +0200
@@ -454,28 +454,12 @@
(top)/clusters/TestCluster/jdbc/modulobase
(top)/clusters/TestCluster/jms
(top)/clusters/TestCluster/jms/as
-(top)/clusters/TestCluster/jms/as/BatchRequest
-(top)/clusters/TestCluster/jms/as/BatchResponse
(top)/clusters/TestCluster/jms/as/ciccio
-(top)/clusters/TestCluster/jms/as/FSCleaner
(top)/clusters/TestCluster/jms/as/License
(top)/clusters/TestCluster/jms/as/Mailer
-(top)/clusters/TestCluster/jms/as/Plans
-(top)/clusters/TestCluster/jms/as/RiaResponse
-(top)/clusters/TestCluster/jms/ConnectionFactory
-(top)/clusters/TestCluster/jms/pac
-(top)/clusters/TestCluster/jms/pac/as
-(top)/clusters/TestCluster/jms/pac/as/Events
(top)/clusters/TestCluster/jms/queue
-(top)/clusters/TestCluster/jms/queue/batch-request
-(top)/clusters/TestCluster/jms/queue/batch-response
-(top)/clusters/TestCluster/jms/QueueConnectionFactory
-(top)/clusters/TestCluster/jms/queue/events
-(top)/clusters/TestCluster/jms/queue/filesystem-cleaner
(top)/clusters/TestCluster/jms/queue/license
(top)/clusters/TestCluster/jms/queue/mailer
-(top)/clusters/TestCluster/jms/queue/plans
-(top)/clusters/TestCluster/jms/TopicConnectionFactory
(top)/clusters/TestCluster/jta
(top)/clusters/TestCluster/jta/usertransaction
(top)/clusters/TestCluster/SecurityServer
@@ -495,8 +479,10 @@
(top)/clusters/TestCluster/url/casCfgFile
(top)/clusters/TestCluster/UserRegistry
(top)/clusters/TestCluster/wb25
-(top)/clusters/TestCluster/wb25/topic
-(top)/clusters/TestCluster/wb25/topic/ria-response
+(top)/clusters/TestCluster/wb25/conf
+(top)/clusters/TestCluster/wb25/conf/locking
+(top)/clusters/TestCluster/wb25/conf/locking/lockingEnabled
+(top)/clusters/TestCluster/wb25/conf/rootFolder
(top)/clusters/TestCluster/wm
(top)/clusters/TestCluster/wm/ard
(top)/clusters/TestCluster/wm/default
As you can see, once
+(top)/clusters/TestCluster/wb25/conf/locking/lockingEnabled
is added, all the other cluster-scoped JMS entries are removed.
It's really weird.
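For reference, the dumps above were taken with the dumpNameSpace tool; a typical invocation (host and bootstrap port are placeholders) looks like:

dumpNameSpace.sh -host localhost -port 2809 -root cell > clsdump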
The problem in WebSphere with Environment -> Naming -> Namespace Bindings
is the following:
If you create a name space binding with a name like:
url/someVariable
or, generically:
something/someVar
and then use the same "something" prefix for some other resource, such as a URL, JDBC, or JMS resource, the name space binding puts "something" into read-only mode, and WebSphere will fail when it tries to configure the other resources.
You cannot spot this during configuration, because you only get the failure at the first restart of the application server.
So be careful when choosing names inside JNDI.
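To illustrate with the names from the dump above: the string binding wb25/conf/locking/lockingEnabled makes the wb25 context read-only, so a resource previously bound under it can no longer be rebound at restart:

Name space binding created: (top)/clusters/TestCluster/wb25/conf/locking/lockingEnabled
Resource no longer bound: (top)/clusters/TestCluster/wb25/topic/ria-response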