The term stuck thread is commonly used in Oracle WebLogic Server.
What is a stuck thread, and why does WebLogic diagnose a thread as stuck?
When do stuck threads occur?
What impact do they have on the running application?
What are the prevention mechanisms?
Stuck threads are threads that are blocked and haven't returned to the thread pool within a certain amount of time. By default, WLS uses 600 seconds: if a thread doesn't return within 600 seconds, it gets flagged as a 'stuck thread'.
-> Stuck threads are only flags, there to warn you that a thread is taking too long.
Searching a bit on Google I found this site: http://www.munzandmore.com/2012/ora/weblogic-stuck-threads-howto
It explains what stuck threads are, as well as some methods to work around them.
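If the 600-second default is too aggressive for work you know is legitimately long-running, the threshold can be raised per server, either in the console (Servers > Configuration > Tuning, "Stuck Thread Max Time") or from WLST. Below is a minimal WLST (Jython) sketch, assuming the standard ServerMBean attributes StuckThreadMaxTime and StuckThreadTimerInterval; the credentials, admin URL and server name are placeholders.

# Hypothetical WLST session (run with wlst.sh): raise the stuck-thread threshold on one server.
connect('weblogic', 'welcome1', 't3://localhost:7001')
edit()
startEdit()
cd('/Servers/ManagedServer1')
cmo.setStuckThreadMaxTime(1200)      # seconds of continuous work before a thread is flagged as stuck
cmo.setStuckThreadTimerInterval(60)  # how often the server scans for stuck threads
save()
activate()
disconnect()

Keep in mind this only changes when the flag is raised; it doesn't fix whatever is making the threads slow in the first place.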
This is about the Ultimate Thread Group in JMeter.
I used the BlazeMeter extension to create the .jmx file, then replaced the thread group with an Ultimate Thread Group.
Although I set a 30-second shutdown time, when it reaches the maximum number of threads and the hold-load period is finished, it doesn't seem to shut the threads down (very slowly, 1 or 2 threads every 20 seconds). Is there anything I can do to solve this problem?
Most probably your application gets overloaded and starts responding slowly.
JMeter politely "asks" threads to stop once they are free, and the threads in their turn kindly wait for the system under test to respond (in order to avoid potential Socket Closed errors). Once it responds, they are stopped immediately.
So the solution is to set reasonable connect and response timeouts, e.g. via HTTP Request Defaults, as JMeter doesn't apply any timeout by default; if the server won't respond or cut the connection from its end, JMeter will wait for the response forever.
Also consider removing Listeners from your test plan; they don't add any value and just consume resources.
I wanted to know how I can make the io do something like a thread.join() and wait for all tasks to finish.
io_type->post(strand->wrap(boost::bind(&somemethod, ptr, parameter)));
In the above code, if 4 threads were initially launched, this would give work to the next available thread. However, I want to know how I could actually wait for all the threads to finish their work, like we do with thread.join().
If this really needs to be done, then you could set up a mutex or critical section to stop your io handlers from processing messages off the socket. This would need to be activated from another thread. But, more importantly...
Perhaps you should rethink your design. The problem with having the io wait for other threads to finish is that the io would then be unresponsive. In general, not a good idea. I suspect that most developers working on networking software would not even consider it. If you are receiving messages that are not ready to be processed yet due to other processing that is going on, then consider storing them in a queue and process them on a different thread when the other threads have signaled that they have completed their work.
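For what it's worth, that queue-based hand-off (park incoming messages in a queue, let a dedicated worker drain it, and block only at the one point where you really do need everything processed) is language-agnostic. Here is a minimal sketch of the idea in Python rather than C++, with purely illustrative names; in the Boost.Asio case the queue would hold your deserialized messages and the worker would be an ordinary thread.

import queue
import threading

pending = queue.Queue()   # messages wait here until the worker picks them up

def process(msg):
    print("processing", msg)   # stand-in for the real work

def worker():
    while True:
        msg = pending.get()       # blocks until a message is available
        if msg is None:           # sentinel value: shut the worker down
            pending.task_done()
            break
        process(msg)
        pending.task_done()

t = threading.Thread(target=worker, daemon=True)
t.start()

for m in ["msg-1", "msg-2", "msg-3"]:   # stand-ins for received messages
    pending.put(m)

pending.join()      # the "wait for all queued work to finish" step
pending.put(None)   # tell the worker to stop
t.join()

The point is that the waiting happens outside the io handlers: the handlers only enqueue work, and whoever actually needs the results is the one that blocks.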
I profiled an application. Basically every thread reads an XML file from a network share, deserializes an object, logs to local files, asynchronously logs to db and calls a web service.
The number of threads is about 14 on a 24-core machine.
The Redgate profiler shows me that the multithreaded application is waiting for synchronization 70% of the time. Is this an alarming signal, or to be expected? Also, if you can give advice on how to approach analysing such a profiler log, please share your knowledge.
Waiting for synchronization just means that a thread is suspended while waiting for another thread to complete an operation. Whether or not you should be concerned about this depends on how long you expect the operation on that thread to take to reach completion.
If the stack indicates a read/write, then it may just mean the disk is slow, for example. Maybe you can minimize that by changing your code; maybe it's just a flaky network or disk drive.
I have an application that makes several slow http calls on certain inbound API requests and I'd like those to run in parallel because there are several and they are slow.
For a thread pool, I've previously used http://burgestrand.se/articles/quick-and-simple-ruby-thread-pool.html.
Are there any architecturally sound solutions for running this in parallel, with or without a thread pool?
Edit
My apologies, I was watching a movie while typing this up and wrote "serial" in the places where I have italicized "parallel". Thanks to #Catnapper for the catch. How embarrassing.
For good leads try Sidekiq:
http://mperham.github.com/sidekiq/
And Celluloid:
http://www.unlimitednovelty.com/2011/05/introducing-celluloid-concurrent-object.html
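If all you need is to fan the slow calls out in parallel inside a single request and collect the results before responding, the pattern underneath (with or without one of those libraries) is a small pool of workers that you wait on. A language-neutral sketch of that pattern, written in Python here with hypothetical URLs, since the idea maps directly onto a Ruby thread pool:

from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

# Hypothetical endpoints standing in for the several slow HTTP calls.
urls = [
    "https://example.com/slow-a",
    "https://example.com/slow-b",
    "https://example.com/slow-c",
]

def fetch(url):
    # Each call runs on its own pool thread; the timeout keeps one hung
    # endpoint from stalling the whole batch.
    with urlopen(url, timeout=10) as resp:
        return resp.read()

with ThreadPoolExecutor(max_workers=5) as pool:   # pool size caps concurrency
    results = list(pool.map(fetch, urls))         # blocks until every call finishes

print(len(results), "responses collected")

The pool size is the main tuning knob: big enough to overlap the slow calls, small enough not to flood the downstream services.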
I'm running an eventmachine process on heroku, and it seems to be hitting their memory limit of 512MB after an hour or so. I start seeing messages like this:
Error R14 (Memory quota exceeded)
Process running mem=531M(103.8%)
I'm running a lot of events through the reactor, so I'm thinking maybe the reactor is getting backed up (I'm imagining it as a big queue)? But there could be some other reason; I'm still fairly new to EventMachine.
Are there any good ways to profile EventMachine and get some stats on it? As a simple example, I was hoping to see how many events were scheduled in the queue, to check whether it was getting backed up and keeping them all in memory. But if anyone has other suggestions, I'd really appreciate them.
Thanks!
I use EventMachine extensively and have never run into any memory leak inside the reactor, so my bet is that the leak is in your Ruby code, but without knowing more about your application it is hard to give you a real answer.
The only queue I can think of right now is the thread pool: each time you use the defer method, the block is either given to a free thread or queued while waiting for one. I suppose that if all your threads are blocked waiting for something, the queue could grow and use all the available memory.
The leak turned out to be in Mongoid's identity_map (nothing to do with EventMachine). Setting Mongoid.identity_map_enabled = false at the beginning of the EventMachine process resolved it.