I'm trying to combine cx_Oracle, SQLAlchemy and gevent, using the approaches from "SQLAlchemy / gevent / cx_Oracle pool size stays at 1" and https://github.com/oracle/python-cx_Oracle/issues/126
When I allow cx_Oracle to run in parallel I can create many connections. Actually, I can create so many that in the end my connection is killed with "KPEDBG_HDL_PUSH_FCPTRMAX", which I guess is a throttling exception. But does anybody know what this error indicates?
You have data structures in your database that are protected. This failure is raised when multiple threads access such a structure concurrently.
I have a Spring Boot application which runs via a bat file. I use it for many background services which interact with the database, create txt files, etc. All methods are annotated with @Scheduled(fixedDelayString = "${fixedDelay}"), where fixedDelay=2000.
I have configured as many threads in application.properties as there are @Scheduled annotations. One method calls three MSSQL database procedures, and I have to wait for all three responses before proceeding further. For this I used Executors.newFixedThreadPool(3) and wait for the responses with future.get(). This is a long-running process that might take 2 or 3 hours or even more. Now I am getting an OutOfMemoryError: GC overhead limit exceeded. First I tried to increase the heap size, but the issue still occurs. When I examined the heap dump, I found that the OutOfMemoryError arises during this method's execution. Is there some limitation that prevents calling a SQL statement that takes 3 hours or more from a thread?
I took the heap dump, but I could not find the root cause from it, because all I do here is call the procedures; there is no other operation. Please help with this. The heap dump image is attached.
Thanks
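For reference, a minimal sketch of the pattern described above: a fixed pool of three threads, one task per procedure, blocking on the futures. The procedure names and timings are placeholders (the real code would run a JDBC CallableStatement). Note the shutdown() in the finally block: if a new pool is created on every scheduled run and never shut down, the live threads accumulate, which is a common way to end up with GC overhead errors.

```java
import java.util.List;
import java.util.concurrent.*;

public class ProcedureFanOut {
    // Stand-in for one of the three stored-procedure calls; the real code
    // would execute a JDBC CallableStatement here.
    static Callable<String> proc(String name) {
        return () -> {
            TimeUnit.MILLISECONDS.sleep(50); // simulate a long-running call
            return name + ":done";
        };
    }

    public static List<String> runAll() throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(3);
        try {
            Future<String> f1 = pool.submit(proc("proc1"));
            Future<String> f2 = pool.submit(proc("proc2"));
            Future<String> f3 = pool.submit(proc("proc3"));
            // Block until all three responses arrive, as in the question.
            return List.of(f1.get(), f2.get(), f3.get());
        } finally {
            // Without shutdown(), each scheduled run would leak a pool of
            // live threads (and their stacks) until the heap is exhausted.
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(runAll());
    }
}
```

Alternatively, create the pool once and reuse it across scheduled runs instead of shutting it down each time.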
So right now I have a single thread to handle all the requests for the database. Let's say I have 400 requests per second for logins / logouts / other stuff, and 400 requests per second which are only related to items (move them, update them, remove them, etc).
Obviously, the problem is that if I want to load an item from the database while it is busy processing a login request, there will be a delay. I want it to be instant; that's why I wanted to create another thread, dedicated to item requests, with the other thread processing logins/logouts, etc.
Microsoft says this:
1: Have multiple statement handles on a single connection handle, with a single thread for each statement handle.
2: Have multiple connection handles, with a single statement handle and single thread for each connection handle.
What exactly are the differences between the two approaches? I obviously need to fetch and insert/update data in both threads at the same time.
Will this two-threads-vs-one approach speed things up?
Both threads will work exclusively on different SQL tables (i.e. the items thread will only use ITEMS_TABLE, never LOGIN_TABLE, and vice versa).
Currently I'm using the following functions (C++):
SQLSetEnvAttr with SQL_OV_ODBC3
SQLConnect
SQLAllocHandle
SQLBindParameter
SQLExecDirect
Answering your questions:
Q1: What exactly are the differences between the two approaches?
Answer:
The first approach shares the same connection handle across multiple threads: you connect first, then start your threads, and each thread creates its own statement handle.
The second approach uses a different connection handle for each thread: you create your threads, and each thread opens its own connection and creates its own statement handle(s).
I'd avoid the first approach (sharing the connection handle between multiple threads) because it has several restrictions. For example, suppose one of your threads wants to switch AUTOCOMMIT on or off. Since AUTOCOMMIT is a connection attribute, and all threads share the same connection handle, changing this setting affects ALL the other threads.
Q2: Will this two-threads-vs-one approach speed things up?
Answer:
I don't think you will notice any difference.
In both cases, sharing the same environment handle between multiple threads should be fine.
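A minimal sketch of the second approach, one private connection and statement per thread. The ODBC calls are replaced with a hypothetical Conn stand-in so the sketch is self-contained and runnable; in the real program each thread would do its own SQLAllocHandle/SQLConnect and clean up its own handles.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class ConnPerThread {
    // Hypothetical stand-in for a connection handle. In the real ODBC code
    // the constructor would be SQLAllocHandle + SQLConnect, exec() would be
    // SQLExecDirect, and close() would free the handles.
    static class Conn {
        static final AtomicInteger open = new AtomicInteger();
        Conn()                 { open.incrementAndGet(); }
        void exec(String sql)  { /* SQLExecDirect would go here */ }
        void close()           { open.decrementAndGet(); }
    }

    public static int run() throws InterruptedException {
        Runnable itemsWorker = () -> {
            Conn c = new Conn();                // private connection handle
            c.exec("UPDATE ITEMS_TABLE ...");   // this thread touches only items
            c.close();
        };
        Runnable loginWorker = () -> {
            Conn c = new Conn();                // a separate connection handle
            c.exec("SELECT ... FROM LOGIN_TABLE");
            c.close();
        };
        Thread t1 = new Thread(itemsWorker);
        Thread t2 = new Thread(loginWorker);
        t1.start(); t2.start();
        t1.join(); t2.join();
        return Conn.open.get(); // 0: every thread closed its own handle
    }

    public static void main(String[] args) throws Exception {
        System.out.println("open handles after run: " + run());
    }
}
```

Because no handle is ever shared, connection attributes such as AUTOCOMMIT can be set per thread without affecting the other.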
Is there any way to achieve high TPS with a minimal number of connections using LoadRunner?
I am using the Java protocol to test MQ.
The current scenario achieves 30 TPS with a load of 15 Vusers.
Is there any way to achieve 30 TPS using only 2 or 3 Vusers?
My scenario looks like this,
init()-- Make connection to Qmgr
Action()-- sending message and getting the response
End()--- closing the connection.
So you're saying that currently each virtual user achieves only 2 TPS.
If you have more than one iteration defined in your run-time settings, then 'Action' should loop, reusing the current connection. If you're already doing this, then that is as fast as you can go with a single thread.
Ensure the script is correctly re-using the connection within Action().
Otherwise the only way to speed things up is to optimise the code of the script.
Also ensure that the messages aren't consumed too fast; I've found that trying to read from an empty IBM MQ queue can cause Vusers to stall.
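The init/Action/End split above can be sketched as follows. A hypothetical Session stand-in replaces the real queue-manager connection so the sketch is runnable; in a Java Vuser these three steps would live in the init(), action() and end() methods, with Action looping once per iteration.

```java
public class VuserSketch {
    // Hypothetical stand-in for the MQ queue-manager connection.
    static class Session {
        int connects = 0, messages = 0;
        void connect()     { connects++; }
        void sendAndRecv() { messages++; }
        void close()       { }
    }

    public static Session run(int iterations) {
        Session s = new Session();
        s.connect();                 // init(): connect to the Qmgr once
        for (int i = 0; i < iterations; i++) {
            s.sendAndRecv();         // Action(): reuse the same connection
        }
        s.close();                   // End(): close the connection once
        return s;
    }

    public static void main(String[] args) {
        Session s = run(30);
        System.out.println(s.connects + " connection, " + s.messages + " messages");
    }
}
```

The point of the shape: 30 transactions cost exactly one connect, which is what correctly configured iterations give you.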
We are running a simple multi-threaded Java application which uses Berkeley DB databases for its storage. There are about 500 threads, and each thread has its own Berkeley DB database of about 100K key-value pairs. All databases are transactional, each transaction contains at most about 1,000 operations, and there are no long-running transactions.
The problem is that, occasionally, Berkeley DB recovery takes a very long time when we restart our application. During recovery (opening the environment) we see the Java process reading from disk at a rate of ~100 MB/s. No writes, just reads.
Our setup is like this:
je.env.runCheckpointer=true
je.env.runCleaner=true
je.checkpointer.highPriority=true
je.cleaner.threads=256
je.cleaner.maxBatchFiles=10
je.log.checksumRead=false
je.lock.nLockTables=353
je.maxMemory=16106127360
je.log.nDataDirectories=256
We also tried running a checkpoint manually every 15 minutes (assuming the checkpointer might be stalling), and we set setMinimizeRecoveryTime(true), but neither helped.
We suspect the problem lies in some Java or Berkeley DB configuration.
Is there a way to ensure faster recovery while sacrificing the speed of puts into the database?
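One setting that may be worth trying (an assumption on our part, not something verified against this workload): JE recovery has to replay the log written since the last complete checkpoint, so making the checkpointer fire more often, at the cost of extra write overhead, should shrink the replay window. In the style of the settings above, for example:

```properties
je.checkpointer.bytesInterval=5000000
```

Lowering this below its default makes checkpoints more frequent, trading put throughput for shorter recovery, which matches the trade-off asked about.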
I have a process which uses the concurrent-ruby gem to handle a large number of API calls concurrently using Concurrent::Future.execute, and, after some time, it dies:
ERROR -- : can't create Thread (11) (ThreadError)
/current/vendor/bundler_gems/ruby/2.0.0/bundler/gems/concurrent-ruby-cba3702c4e1e/lib/concurrent/executor/ruby_thread_pool_executor.rb:280:in `initialize'
Is there a simple way I can tell Concurrent to limit the number of threads it spawns, given I have no way of knowing in advance just how many API calls it's going to need to make?
Or is this something I need to code for explicitly in my app?
I am using Ruby 2.0.0 (alas, I don't currently have the option to change that).
After some reading and some trial and error I have worked out the following solution. Posting here in case it helps others.
You control the way Concurrent uses threads by specifying a RubyThreadPoolExecutor. [1]
So, in my case the code looks like:
threadPool = Concurrent::ThreadPoolExecutor.new(
  min_threads: [2, Concurrent.processor_count].min,
  max_threads: [2, Concurrent.processor_count].max,
  max_queue:   [2, Concurrent.processor_count].max * 5,
  overflow_policy: :caller_runs
)
result_things = massive_list_of_things.map do |thing|
  (Concurrent::Future.new executor: threadPool do
    expensive_api_call using: thing
  end).execute
end
On my laptop I have 4 processors, so this will use between 2 and 4 threads and allow up to 20 tasks in the queue before forcing execution onto the calling thread. As threads free up, the concurrency library will reallocate them.
Choosing the right multiplier for the max_queue value seems to be a matter of trial and error, but 5 is a reasonable starting point.
[1] The actual docs describe a different way to do this, but the actual code disagrees with the docs, so the code I have presented here is based on what actually works.
The typical answer to this is to create a thread pool.
Create a finite number of threads and keep track of which are active and which aren't. When a thread finishes an API call, mark it as inactive so that the next call can be handled by it.
The gem you're using already has thread pools.