WebLogic questions - session

I have a few questions
1) How can we define, in the WebLogic configuration, how many concurrent users are allowed (or can be allowed) at a time for a particular application?
2) How can we tell how many threads are being used in WebLogic at a given time?
3) What maximum number of JDBC connections should I set so that users are not blocked because all connections are used up? How do I keep a balance between the number of concurrent users/threads allowed and the maximum number of JDBC connections?
Thanks

It differs for each use case scenario.
Usually, though, one WLS instance can cover 50~100 active users, assuming the instance has 2 CPUs and a 1~1.5 GB heap.
This document will be useful to your question:
"Planning Number Of Instance And Thread In Web Application Server"

1) You can use Work Managers to manage requests. However, restricting the number of concurrent users will vary from application to application. If it is a web app, use a work manager with a max constraint equal to the number of users you want to restrict it to. However, be sure you figure out how to handle overflow - what will you do when you get 100 requests but have a 5-user restriction? Is this synchronous or asynchronous processing?
2) Ideally you would want a 1:1 ratio of threads to connections in the pool. This guarantees that no thread (user request) is waiting for a connection. I would suggest trying this. You can monitor the JDBC connection pools using the WebLogic console by adding fields to the columns under the 'Monitoring' tab for the connection pool. If you have a high number of waiters, and/or a high wait time, then you would want to increase the number of connections in the pool. You could start with a 1:0.75 ratio of threads:connections, do performance/load testing and adjust based on your findings. It really depends on how well you manage the connections. Do you release the connection immediately after you get the data from the database, or do you proceed with application logic and release the connection at the end of the method/logic? If you hold the connection for a long time you will likely need closer to a 1:1 ratio.

1) If you assign a session to each user, then you can control the maximum number of sessions in your webapp's WebLogic descriptor, for example by adding the following constraint:
<session-descriptor>
    <max-in-memory-sessions>12</max-in-memory-sessions>
</session-descriptor>
It's more effective (if you mean 1 user = 1 session) than limiting the number of requests with work managers.
Another way, when you can't predict the size of sessions or the number of users, is to adjust the memory overload parameters and set:
weblogic.management.configuration.WebAppContainerMBean.OverloadProtectionEnabled.
More info here:
http://download.oracle.com/docs/cd/E12840_01/wls/docs103/webapp/sessions.html#wp150466
2) Thread capacity is managed by WebLogic through work managers. By default, just one exists: the default work manager, with an unlimited number of threads (!!!).
3) Usually, matching the number of JDBC connections to the number of threads is the most effective approach.
The following page should be of great interest:
http://download.oracle.com/docs/cd/E11035_01/wls100/config_wls/overload.html

As far as I know, you have to control these kinds of things in
weblogic-ejb-jar.xml
or
weblogic.xml
If you look through the weblogic-ejb-jar.xml elements, you can find what you need.

Related

How to use Pomelo.EntityFrameworkCore.MySql provider for ef core 3 in async mode properly?

We are building an ASP.NET Core 3 application which uses EF Core 3.0 with the Pomelo.EntityFrameworkCore.MySql provider 3.0.
Right now we are trying to change all database calls from sync to async, like:
//from
dbContext.SaveChanges();
//to
await dbContext.SaveChangesAsync();
Unfortunately, when we do this we experience two issues:
The number of connections to the server grows significantly compared to the same tests with sync calls
The average processing speed of our application drops significantly
What is the recommended way to use EF Core with MySQL asynchronously? Any working example or evidence of using EF Core 3 with MySQL asynchronously would be appreciated.
It's hard to say what the issue here is without seeing more code. Can you provide us with a small sample app that reproduces the issue?
Any DbContext instance uses exactly one database connection for normal operations, independent of whether you call sync or async methods.
The number of connections to the server grows significantly compared to the same tests with sync calls
What kind of tests are we talking about? Are they automated? If so, how many tests are being run? Because of the nature of async calls, if you run 1000 tests in parallel, every test with its own DbContext, you will end up with 1000 parallel connections.
Though with Pomelo, you will not additionally end up with 1000 threads, as you would with Oracle's provider.
Update:
We test an ASP.NET Core (MVC) call which goes to the DB and reads and writes something: 50 threads, using DbContextPool with a limit of 500. If I use dbContext.SaveChanges(), Add(), and all the other context methods synchronously, I end up with around 50 connections to MySQL; using dbContext.SaveChangesAsync(), AddAsync(), ReadAsync(), etc., I end up seeing up to 250 connections to MySQL and the average response time of the page drops by a factor of 2 to 3.
(I am talking about ASP.NET and requests below. The same is true for parallel run test cases.)
If you use Async methods all the way, nothing will block, so your 50 threads are free to handle the next 50 requests while the database is still executing the queries for the first 50 requests.
This will happen again and again because ASP.NET might process your requests faster than your database can return its results. So you will end up with a lot of parallel database queries.
This does not happen when executing the Sync methods, because every thread blocks and you end up with a maximum of 50 parallel queries (one per thread).
So this is expected behavior and just a consequence of async method calls.
You can always modify your code or web server configuration to limit the amount of concurrent ASP.NET requests.
50 threads, using DbContextPool with limit 500.
Also be aware that DbContextPool does not limit how many DbContext objects can concurrently exist, but only how many will be kept in the pool. So if you set DbContextPool to 500, you can create more than 500 contexts, but only 500 will be kept alive after using them.
Update:
There is a very interesting low-level talk about lock-free database pool programming from @roji that addresses this behavior and takes your position - that there should be an upper limit on the connection pool which results in blocking when exceeded - and makes a great case for this behavior.
According to @bgrainger from MySqlConnector, that is how it is already implemented (the docs did not explicitly state this, but they do now). The MaxPoolSize connection string option has a default value of 100, so if you use connection pooling, don't override this value, and don't use multiple connection pools, you should not have more than 100 active connections at a given time.
From GitHub:
This is a documentation error, if you are interpreting the docs to mean that you can create an unlimited number of connections.
When pooling is true, each connection pool (there is one per unique connection string) only allows MaximumPoolSize connections to be open simultaneously. Each additional call to MySqlConnection.Open will block until a connection is returned to the pool.
When pooling is false, there is no limit to the number of connections that can be opened simultaneously; it's up to the user to manage the concurrency.
Check whether you have Pooling=false in your connection string, as mentioned by Bradley Grainger in the comments.
After I removed pooling=false from my connection string, my app ran literally 3x faster.

Manage multi-tenancy ArangoDB connection

I use ArangoDB with Go (using go-driver) and need to implement multi-tenancy, meaning every customer will have their data in a separate DB.
What I'm trying to figure out is how to make this multi-tenancy work. I understand that it's not sustainable to create a new DB connection for each request, which means I have to maintain a pool of connections (not a typical connection pool, though). Of course, I can't just create connections without limit; there has to be a limit. However, the more I think about it, the more I realize that I need some advice. I'm new to Go, coming from the PHP world, and it's a completely different paradigm there.
Some details
I have an API (written in Go) which talks to ArangoDb using arangodb/go-driver. A standard way of creating a DB connection is
create a connection
conn, err := graphHTTP.NewConnection(...)
create client
c, err := graphDriver.NewClient(...)
create DB connection
graphDB, err := p.cl.Database(...)
This works if one has only one DB, and the DB connection is created when the API boots up.
In my case there are many, and, as previously suggested, I need to maintain a DB connection pool.
Where it gets fuzzy for me is how to maintain this pool, keeping in mind that the pool has to have a limit.
Say my pool has size 5, and over time it has been filled up with connections. A new request comes in, and it needs a connection to a DB that is not in the pool.
The way I see it, I have only 2 options:
Kill one of the pooled connections, if it's not used
Wait till #1 can be done, or throw an error if waiting time is too long.
The biggest unknown for me, mainly because I've never done anything like this, is how to track whether a connection is being used or not.
What makes things even more complex is that the DB connection has its own pool, handled at the transport level.
Any recommendations on how to approach this task?
I implemented this in a Java proof of concept SaaS application a few months ago.
My approach can be described at a high level as:
Create a Concurrent Queue to hold the Java driver instances (the Java driver has connection pooling built in)
Use the subdomain to determine which SaaS client is being used (you can use a URL param, but I don't like that approach)
Reference the correct connection from the Queue based on the SaaS client, or create a new one if it is not in the Queue.
Continue with the request.
This was fairly trivial because each DB was named to match the subdomain, but a lookup from the _system database could also be used.
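A minimal Java sketch of that approach (a ConcurrentHashMap keyed by subdomain is used here instead of a queue, and the host, credentials, and the database-name-equals-subdomain convention are assumptions, not details from the original setup):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import com.arangodb.ArangoDB;
import com.arangodb.ArangoDatabase;

// One database handle per tenant, created lazily and reused; the driver pools connections internally.
public class TenantDatabases {

    private final Map<String, ArangoDatabase> tenants = new ConcurrentHashMap<>();

    private final ArangoDB arango = new ArangoDB.Builder()
            .host("localhost", 8529) // hypothetical host/port
            .user("root")            // hypothetical credentials
            .password("secret")
            .build();

    public ArangoDatabase forTenant(String subdomain) {
        // computeIfAbsent performs the "create a new one if not present" step atomically.
        return tenants.computeIfAbsent(subdomain, name -> arango.db(name));
    }
}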
*Edit
The Concurrent Queue holds at most one Driver object per database, and hence its size will at most match the number of databases. In my testing I did not manage the size of this Queue at all.
A good server should be able to hold hundreds or even thousands of these, depending on memory, and a load-balancing strategy can be used to split clients across different server clusters if you scale large enough. A worker thread could also be used to remove objects based on age, but that might interfere with throughput.

Connection pooling and scaling of instances

Note: This is a design-related question to which I couldn't find a satisfying answer, hence asking here.
I have a Spring Boot app which is deployed in the cloud (Cloud Foundry). The app connects to an Oracle database to retrieve data. The application uses a connection pool (HikariCP) to maintain the connections to the database. Let's say the number of connections is set to 5. Now the application has the capacity to scale automatically based on the load. All the instances will share the same database. At any moment there could be 50 instances of the same application running, which means the total number of database connections will be 250 (i.e. 5 * 50).
Now suppose the database can handle only 100 concurrent connections. In the current scenario, the first 20 instances will use up the 100 available connections. What will happen when the next 30 instances try to connect to the DB? If this is a design issue, how can it be avoided?
Please note that the numbers provided in the question are hypothetical for simplicity. The actual numbers are much higher.
Let's say:
Number of available DB connections = X
Number of concurrent instances of your application = Y
Maximum size of the DB connection pool within each instance of your application = X / Y
That's slightly simplistic since you might want to be able to connect to your database from other clients (support tools, for example) so perhaps a safer formula is (X * 0.95) / Y.
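As a rough, hypothetical illustration in code: with X = 100 and Y = 50 the cap works out to (100 * 0.95) / 50, i.e. roughly 1 connection per instance. A minimal HikariCP sketch applying such a cap (the JDBC URL and credentials are placeholders) might look like:

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

public final class PoolSizing {

    // X = connections the DB allows for this application, Y = expected number of app instances.
    static int perInstancePoolSize(int availableDbConnections, int expectedInstances) {
        return Math.max(1, (int) ((availableDbConnections * 0.95) / expectedInstances));
    }

    static HikariDataSource buildDataSource() {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:oracle:thin:@//db-host:1521/APPDB"); // hypothetical URL
        config.setUsername("app_user");                               // hypothetical credentials
        config.setPassword("secret");
        // With X = 100 and Y = 50 this caps each instance at (100 * 0.95) / 50, about 1 connection.
        config.setMaximumPoolSize(perInstancePoolSize(100, 50));
        config.setConnectionTimeout(30_000); // how long a request waits for a free connection (ms)
        return new HikariDataSource(config);
    }
}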
Now, you have ensured that your application layer will not encounter 'no database connection exists' issues. However if (X * 0.95) / Y is, say, 25 and you have more than 25 concurrent requests passing through your application which need a database connection at the same time then some of those requests will encounter delays when trying to acquire a database connection and, if those delays exceed a configured timeout, they will result in failed requests.
If you can limit throughput in your application such that you will never have more than (X * 0.95) / Y concurrent 'get database connection' requests then hey presto the issue disappears. But, of course, that's not typically realistic (indeed since less is rarely more ... telling your clients to stop talking to you is generally an odd signal to send). This brings us to the crux of the issue:
Now the application has the capacity to scale automatically based on the load.
Upward scaling is not free. If you want the same responsiveness when handling 100000N concurrent requests as you have when handling N concurrent requests, then something has to give; you have to scale up the resources which those requests need. So, if they make use of database connections then the number of concurrent connections supported by your database will have to grow. If server-side resources cannot grow in proportion to client usage then you need some form of back pressure, or you need to carefully manage your server-side resources. One common way of managing your server side resources is to ...
Make your service non-blocking i.e. delegate each client request to a threadpool and respond to the client via callbacks within your service (Spring facilitates this via DeferredResult or its Async framework or its RX integration)
Configure your server side resources (such as the maximum number of connections allowed by your DB) to match the maximum throughput from your services, based on the total size of your service instances' client-request threadpools
The client-request threadpool limits the number of currently active requests in each service instance; it does not limit the number of requests your clients can submit. This approach allows the service to scale upwards (to a limit represented by the size of the client-request threadpools across all service instances) and in so doing it allows the service owner to safeguard resources (such as their database) from being overloaded. And since all client requests are accepted (and delegated to the client-request threadpool), the client requests are never rejected, so from their perspective scaling feels seamless.
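A minimal sketch of the first point, assuming Spring MVC's DeferredResult; the endpoint, the pool size of 20, and the stand-in database call below are illustrative, not prescriptive:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.context.request.async.DeferredResult;

@RestController
public class ReportController {

    // Bounded client-request pool: at most 20 requests per instance hit the database at once.
    private final ExecutorService clientRequestPool = Executors.newFixedThreadPool(20);

    @GetMapping("/reports/{id}")
    public DeferredResult<String> getReport(@PathVariable long id) {
        // The servlet container thread returns immediately; the result is set later via a callback.
        DeferredResult<String> result = new DeferredResult<>(5_000L); // request-level timeout in ms
        clientRequestPool.submit(() -> {
            try {
                result.setResult(loadReportFromDatabase(id)); // DB work happens off the container thread
            } catch (Exception e) {
                result.setErrorResult(e);
            }
        });
        return result;
    }

    // Hypothetical stand-in for a query that uses a pooled JDBC connection.
    private String loadReportFromDatabase(long id) {
        return "report-" + id;
    }
}

The fixed-size clientRequestPool is what bounds the concurrent database work per instance; sizing it to match the per-instance connection pool keeps requests from piling up on the pool itself.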
This sort of design is further augmented by a load balancer over the cluster of service instances which distributes traffic across them (round robin or even via some mechanism whereby each node reports its 'busy-ness' with that feedback being used to direct the load balancer's behaviour e.g. direct more traffic to NodeA because it is under utilised, direct less traffic to NodeB because it is over utilised).
The above description of a non-blocking service only scratches the surface; there's plenty more to them (and loads of docs, blog postings, and helpful bits-n-pieces on the Internet), but given your problem statement (concern about server-side resources in the face of increasing load from clients) it sounds like a good fit.

Hard limit connections Spring Boot

I'm working on a simple microservice written in Spring Boot. This service will act as a proxy towards another resource that has a hard concurrent connection limit, and the requests take a while to process.
I would like to impose a hard limit on the concurrent connections allowed to my microservice and reject any excess either with a 503 or at the TCP/IP level. I've tried looking into different configurations for Jetty/Tomcat/Undertow but haven't yet found anything completely convincing.
I found some settings regulating thread pools:
server.tomcat.max-threads=0 # Maximum amount of worker threads.
server.undertow.io-threads= # Number of I/O threads to create for the worker.
server.undertow.worker-threads= # Number of worker threads.
server.jetty.acceptors= # Number of acceptor threads to use.
server.jetty.selectors= # Number of selector threads to use.
But if I understand correctly, these all configure thread pool sizes and will just result in connections being queued at some level.
This seems really interesting, but it hasn't been merged yet and is targeted for Spring Boot 1.5: https://github.com/spring-projects/spring-boot/pull/6571
Am I out of luck using a setting for now? I could of course implement a filter, but I would rather block it at an earlier level and not have to reinvent the wheel. I guess putting Apache or something else in front is also an option, but that still feels like overkill.
Take a look at EmbeddedServletContainerCustomizer.
This gist could give you an idea of how to do that:
TomcatEmbeddedServletContainerFactory factory = ...;
factory.addConnectorCustomizers(connector ->
        ((AbstractProtocol) connector.getProtocolHandler()).setMaxConnections(10000));
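For completeness, a sketch of registering this as a bean via EmbeddedServletContainerCustomizer; the Spring Boot 1.x package names and the limit of 10000 are assumptions carried over from the snippet above:

import org.apache.coyote.AbstractProtocol;
import org.springframework.boot.context.embedded.ConfigurableEmbeddedServletContainer;
import org.springframework.boot.context.embedded.EmbeddedServletContainerCustomizer;
import org.springframework.boot.context.embedded.tomcat.TomcatEmbeddedServletContainerFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class ConnectionLimitConfig {

    @Bean
    public EmbeddedServletContainerCustomizer connectionLimitCustomizer() {
        return (ConfigurableEmbeddedServletContainer container) -> {
            // Only Tomcat exposes connector customizers, so check the factory type first.
            if (container instanceof TomcatEmbeddedServletContainerFactory) {
                ((TomcatEmbeddedServletContainerFactory) container).addConnectorCustomizers(connector ->
                        ((AbstractProtocol) connector.getProtocolHandler()).setMaxConnections(10000));
            }
        };
    }
}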

What is the purpose of the MaxConnectionLifeTime setting

The MongoDB C# driver (at least version 1.9.2) has a setting for MaxConnectionLifeTime. From looking at the code, it appears that connections are removed from the pool when their age exceeds that lifetime. The default is set to 30 minutes.
Why?
Do connections somehow degrade in performance the more times they are used?
We have received anecdotal reports that in some scenarios connections die after a certain amount of time. This is presumably because some firewall/router along the way is periodically dropping connections that have reached a certain age.
By having the driver periodically close connections and open new ones we can avoid being affected by this.
Most users are not affected by this and could use any value they want for this setting.
