I'm running an IoTivity client that rediscovers available resources every 20 seconds. At first it works fine, but after around two and a half minutes the client no longer discovers the resource. Restarting the client doesn't help, but when I restart the server the client rediscovers it again, though only for another 2:30. Why is this happening and how can I fix it?
I'm using IoTivity 1.2.1 and I'm running the server and the client on different embedded devices.
If you mean the "classic" iotivity, not only is 1.2.1 rather old (last release was 1.4), but that project has largely been abandoned except for historical interest, in favor of iotivity-lite.
I have a problem with a server that I'll call Server A:
Server A: Red Hat Enterprise Linux Server release 7.2 (Maipo)
Server B: Red Hat Enterprise Linux Server release 7.7 (Maipo)
jdk-8u231 is installed on both servers.
I have a Spring Boot application running on both servers.
Whenever I use JMeter to send 100 concurrent requests to the application on each server, the application running on Server B has no problem.
But on Server A the application stops responding: the process (PID) is still running, but I can't reach the actuator endpoints, can't open the Swagger page, and can't send new requests ... the log file shows nothing from that point on.
Thread dumps and heap dumps show no significant difference.
Could anyone show me how to analyze this problem?
I still have no idea why the problem occurs.
Well, I can only speculate here, but here are some ideas that may help:
There are two possible sources of the issue here: the Java application and Linux (plus its network policies, firewalls and so forth).
Since you don't know for sure what is happening, try working by elimination.
Create a script that runs 100 concurrent requests. Place the script on Server A (the problematic one) and run it against "localhost". If that works, the issue is not in Java at all, but more likely some network policy or Linux setup.
Place a log message in the controller of the Java application and examine the log. The log should print the request number among other things, so that you can tell whether the application gets stuck after a well-defined number of requests or whether it's always a different number.
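If it helps, here is a minimal sketch of what I mean (the controller name, path and log format are placeholders, not anything from your app):

```java
import java.util.concurrent.atomic.AtomicLong;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

// Hypothetical controller; the only point is the running request counter in the log.
@RestController
public class PingController {

    private static final Logger log = LoggerFactory.getLogger(PingController.class);
    private final AtomicLong counter = new AtomicLong();

    @GetMapping("/ping")
    public String ping() {
        long n = counter.incrementAndGet();
        // If the last number logged before the hang is always the same,
        // the app gets stuck after a well-defined number of requests.
        log.info("handling request #{} on thread {}", n, Thread.currentThread().getName());
        return "pong " + n;
    }
}
```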
Check the configuration of the Spring Boot application. Maybe the number of threads allocated to serve requests by the embedded web server inside the Spring Boot application (assuming you're not using a reactive stack) differs between the two servers. If that pool is exhausted, you won't be able to call REST endpoints, the actuator, etc.
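If it is embedded Tomcat, you can also pin the worker-thread ceiling explicitly so both servers are guaranteed to match. A minimal sketch, assuming Spring Boot 2.x with the servlet stack (the simpler route is the server.tomcat.max-threads / server.tomcat.threads.max property, depending on the Boot version):

```java
import org.apache.coyote.AbstractProtocol;
import org.springframework.boot.web.embedded.tomcat.TomcatServletWebServerFactory;
import org.springframework.boot.web.server.WebServerFactoryCustomizer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class TomcatThreadConfig {

    // Pins the connector's worker-thread ceiling so Server A and Server B cannot silently differ.
    @Bean
    public WebServerFactoryCustomizer<TomcatServletWebServerFactory> tomcatThreads() {
        return factory -> factory.addConnectorCustomizers(connector -> {
            Object handler = connector.getProtocolHandler();
            if (handler instanceof AbstractProtocol) {
                ((AbstractProtocol<?>) handler).setMaxThreads(200); // example value
            }
        });
    }
}
```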
If a JMX connection to the setup is available, connect via JMX and check the Tomcat MBeans (again, assuming there is a Tomcat under the hood) to see pretty much the same information as in the previous point.
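Something along these lines should read the connector thread pool remotely; the JMX URL, port and the exact MBean pattern ("Tomcat:type=ThreadPool,*") are assumptions that depend on how JMX is exposed and which Tomcat version is embedded:

```java
import java.util.Set;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class TomcatThreadPoolCheck {
    public static void main(String[] args) throws Exception {
        // Assumed JMX endpoint; the server side would need e.g.
        // -Dcom.sun.management.jmxremote.port=9010 (and related flags) to expose it.
        JMXServiceURL url =
                new JMXServiceURL("service:jmx:rmi:///jndi/rmi://server-a:9010/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection mbsc = connector.getMBeanServerConnection();
            Set<ObjectName> pools =
                    mbsc.queryNames(new ObjectName("Tomcat:type=ThreadPool,*"), null);
            for (ObjectName pool : pools) {
                System.out.printf("%s busy=%s max=%s%n",
                        pool,
                        mbsc.getAttribute(pool, "currentThreadsBusy"),
                        mbsc.getAttribute(pool, "maxThreads"));
            }
        }
    }
}
```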
You've mentioned thread dumps. Try taking more than one: one before you run the JMeter test, one during the run (while everything still works), and one when everything is stuck.
In the thread dumps, check the actual stack traces; maybe all the threads are stuck talking to the database or something similar and can't serve requests, as explained in the configuration point above.
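If collecting jstack output on those boxes is awkward, roughly the same view is available in-process via the standard ThreadMXBean API; a sketch (you would have to expose it through some debug hook yourself, which is an assumption here):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class StuckThreadReport {
    public static void main(String[] args) {
        // Prints roughly what a jstack dump shows, filtered to threads that are
        // blocked or waiting (e.g. all workers parked on a connection pool).
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        for (ThreadInfo info : threads.dumpAllThreads(true, true)) {
            Thread.State state = info.getThreadState();
            if (state == Thread.State.BLOCKED || state == Thread.State.WAITING) {
                System.out.printf("%s is %s on %s%n",
                        info.getThreadName(), state, info.getLockName());
            }
        }
    }
}
```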
Examine the GC logs; maybe GC is working so hard that you can't really interact with the application.
I have developed a website using the Spring framework, with c3p0 connection pooling and Spring Security, and deployed it on a Tomcat server.
It works fine for two days, but after two days it automatically stops responding.
I have checked for memory leaks but haven't found anything, and there is no trace of a memory leak in the log files.
I also check memory every hour and it looks fine.
My website has about 1000 users on average, the session timeout is 8 hours, and the Tomcat instance has 2GB of RAM.
Please help me out with this.
I have a Digital Ocean droplet (512MB RAM, 20GB SSD Disk, Ubuntu 13.10 x64) on which
a MongoDB instance and
a Tomcat 7 server
run.
On the Tomcat server, the following applications are installed:
Apache CXF-based application, which processes web service requests, interacts with the database and executes scheduled jobs,
Vaadin application,
JSF (Primefaces) application and
Psi Probe.
When I
restart Tomcat,
use the Vaadin and/or JSF application,
then for several weeks do nothing on that machine (it basically is idle during that time),
then try to open the JSF and/or Vaadin application,
I find the site unresponsive (nothing is displayed after I enter the URL in the browser).
When I restart Tomcat (sudo service tomcat7 restart), everything works again. I don't see any obvious problems in the Tomcat logs.
How can I find out,
whether the problem is on the Tomcat side (one of the applications consumes too many resources even if idle) or on the OS side (nothing happens on the machine and therefore the OS puts itself into a "hibernating" mode) and
if the problem is with Tomcat, exactly which of the applications is causing it?
Please start from top to bottom.
then try to open the JSF and/or Vaadin application,
I find the site unresponsive (nothing is displayed after I enter the URL in the browser).
Check whether the service is still running before restarting it: sudo service tomcat7 status and/or ps -ef | grep tomcat
Check with netstat -patune | grep <portnumber, e.g. 443> whether the server is listening on the configured ports
Check your httpd/Apache/Tomcat access logs to see whether the request reaches the server, and if so, whether there are errors or timeouts related to the requests
Check whether the DB connection is still possible
To force some error logs, try changing the maxIdle, maxActive and maxWait attributes of your Tomcat connection pool configuration. The default maxWait is -1, so connections created at some point during those weeks will wait forever.
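Whether the pool is declared as a <Resource> in context.xml or built in code, those three attributes mean the same thing. A rough sketch with Tomcat's jdbc-pool, just to illustrate them (URL, driver and credentials are placeholders):

```java
import javax.sql.DataSource;
import org.apache.tomcat.jdbc.pool.PoolProperties;

public class PoolConfig {
    public static DataSource create() {
        PoolProperties p = new PoolProperties();
        p.setUrl("jdbc:mysql://localhost:3306/appdb"); // placeholder URL
        p.setDriverClassName("com.mysql.jdbc.Driver"); // placeholder driver
        p.setUsername("app");                          // placeholder credentials
        p.setPassword("secret");
        p.setMaxActive(20);   // hard cap on simultaneously borrowed connections
        p.setMaxIdle(10);     // connections kept open while idle
        p.setMaxWait(10000);  // give up after 10s instead of blocking forever (-1)
        return new org.apache.tomcat.jdbc.pool.DataSource(p);
    }
}
```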
@Patrick provided some excellent basic tests.
I notice that you only have 512 MB of RAM. With some fairly beefy software such as tomcat, plus MongoDB on top of that, your machine may simply be overloaded.
Based on that, I would propose a couple additional things to check:
sudo free
Should tell you how much memory is being used, and how much swap space you use.
sudo top
Will tell you which process is using the most memory. You may want to sort the output of top by memory (default is usually by CPU utilization).
Most importantly, check the log files in /var/log (in particular /var/log/messages). You may find indications that the kernel killed one of your processes (probably tomcat).
Using the WebSphere Integrated Solutions Console, a large (18,400 file) web application is updated by specifying a war file name and going through the update screens and finally saving the configuration. The Solutions Console web UI spins a while, then it returns, at which point the user is able to start the web application.
If the application is started after the "successful update", it fails because the files that make up the web application have not yet been exploded out to the deployment directory.
Experimentation indicates that it takes on the order of 12 minutes for the files to appear!
One more bit of background that may be significant: There are 19 application servers on this one WebSphere instance. WebSphere insists that there be a lot of chatter between them, even though they don't need anything from each other. I wondered if this might be slowing things down when it comes to deployment. Or if there's some timer in the bowels of WebSphere that is just set wrong (usual disclaimers apply...I'm just showing up and finding this situation...I didn't configure this installation).
Additional Information:
This is a Network Deployment configuration, and it's all on one physical host.
ND 6.1.0.23
Is this a standalone or an ND setup? I am guessing it is an ND setup, considering you have stated that there are 19 app servers. The nodes need to be synchronized with the deployment manager so that the updated files are available to the respective nodes.
After you update and save the changes, try to synchronize the nodes with the dmgr (or alternatively, as part of the update process, click on review and check the box that says synchronize nodes), and this will distribute the changes to the various nodes.
The default synchronization interval, I believe, is 1 minute.
12 minutes certainly sounds like a lot. Is there any possibility of the network being an issue here?
HTH
Manglu
This might also belong on serverfault. It's kind of a combo between server config and code (I think)
Here's my setup:
Rails 2.3.5 app running on jruby 1.3.1
Service Oriented backend over JMS with activeMQ 5.3 and mule 2.2.1
Tomcat 5.5 with opts: "-Xmx1536m -Xms256m -XX:MaxPermSize=256m -XX:+CMSClassUnloadingEnabled"
Java jdk 1.5.0_19
Debian Etch 4.0
Running top, every time I click a link on my site, I see my java process CPU usage spike. If it's a small page, it's sometimes just 10% usage, but on a more complicated page my CPU sometimes goes up to 44% (never above, not sure why). In this case, a request can take upwards of minutes while my server's load average steadily climbs to 8 or greater. This is just from clicking one link that loads a few requests from some services, nothing too complicated. The java process's memory hovers around 20% most of the time.
If I leave it for a bit, load average goes back down to nothing. Clicking a few more links, climbs back up.
I'm running a small amazon instance for the rails frontend and a large instance for all the services.
Now, this is obviously unacceptable. A single user can spike the load average to 8, and with two people using it, it maintains that load average for as long as we use the site. I'm wondering what I can do to inspect what's going on; I'm at a complete loss as to how to debug this. (It doesn't happen locally when I run the Rails app directly through JRuby, not inside the Tomcat container.)
Can someone enlighten me as to how I might inspect my JRuby app to find out how it could possibly be using such huge resources?
Note: I noticed this occasionally before, seemingly at random, but now, after upgrading from Rails 2.2.2 to 2.3.5, I'm seeing it ALL THE TIME and it makes the site completely unusable.
Any tips on where to look are greatly appreciated. I don't even know where to start.
Make sure that there is no unexpected communication between Tomcat and something else. I would check in the first place that:
the ActiveMQ broker doesn't communicate with other brokers in your network (by default an AMQ broker starts in OpenWire auto-discovery mode; see the sketch at the end of this answer), and
JGroups/multicast in general isn't communicating with anything else in your network.
This unnecessary load may result from processing messages coming from another application.
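If the broker is (or can be) started embedded, one way to rule this out is to run it with no discovery at all. A minimal sketch against the ActiveMQ 5.x BrokerService API (the broker name and port are placeholders):

```java
import org.apache.activemq.broker.BrokerService;

public class IsolatedBroker {
    public static void main(String[] args) throws Exception {
        BrokerService broker = new BrokerService();
        broker.setBrokerName("isolated"); // placeholder name
        broker.setUseJmx(false);
        // Plain TCP only: no networkConnector and no discoveryUri on the transport,
        // so this broker will not find or bridge to other brokers on the network.
        broker.addConnector("tcp://localhost:61616");
        broker.start();
        broker.waitUntilStopped();
    }
}
```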