SCIM WSO2 IS performance

I'm using WSO2 IS version 5.3 with MySQL (mysql-connector-java-5.1.44-bin driver) and a DB size of 220K users. When using SCIM, changing a user's attributes takes ~4.6 s, whereas reading or creating a user takes ~1.1 s. Any suggestions for lowering those times?

You can have a look at the WSO2 SCIM reference architecture in [1]. There you can see the SCIM user manager, which is a wrapper around the Carbon user manager, and WSO2 Charon, which is responsible for decoding SCIM requests and encoding SCIM responses. A bottleneck can occur at three levels:
Level 01: Charon SCIM request/response decoding and encoding
Level 02: How the SCIM user manager uses Carbon user manager functions to perform user store operations
Level 03: The actual user store operations against the underlying user store.
Some tips to isolate the performance issue:
Do the user update operation from the management console and check the latency; if the latency is the same, the issue is with the underlying user store (a rough sketch for timing the SCIM calls themselves follows below).
Disable user store operation event listeners in identity.xml.
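To compare against the console numbers, you can time the SCIM read and update calls directly. A rough client-side sketch, assuming the default SCIM 1.1 endpoint and admin credentials of a stock IS 5.3 install, a Java 11+ client, and that the server certificate is already trusted by the JVM:

// Rough sketch: time a SCIM read (GET) vs. an attribute update (PATCH) for one user.
// Endpoint, credentials, user id, and the PATCH body are assumptions for a default IS 5.3 setup.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class ScimLatencyProbe {
    private static final String BASE = "https://localhost:9443/wso2/scim/Users/";
    private static final String AUTH = "Basic "
            + Base64.getEncoder().encodeToString("admin:admin".getBytes(StandardCharsets.UTF_8));
    private static final HttpClient CLIENT = HttpClient.newHttpClient();

    static long time(String method, String userId, String body) throws Exception {
        HttpRequest.Builder b = HttpRequest.newBuilder(URI.create(BASE + userId))
                .header("Authorization", AUTH);
        if (body == null) {
            b.method(method, HttpRequest.BodyPublishers.noBody());
        } else {
            b.header("Content-Type", "application/json")
             .method(method, HttpRequest.BodyPublishers.ofString(body));
        }
        long start = System.nanoTime();
        CLIENT.send(b.build(), HttpResponse.BodyHandlers.ofString());
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) throws Exception {
        String userId = "a1b2c3d4-...";   // SCIM id of an existing user (placeholder)
        System.out.println("GET   took " + time("GET", userId, null) + " ms");
        System.out.println("PATCH took " + time("PATCH", userId,
                "{\"nickName\":\"probe\"}") + " ms");
    }
}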
[1] https://docs.wso2.com/display/IS500/WSO2+Identity+Server+as+a+SCIM+Service+Provider

Adding to @Gayan's tips to isolate the performance issue.
You can enable JDBC logging with log4jdbc and monitor the time taken to execute each DB query. Then you may be able to narrow down whether the issue is in the DB interaction or not.
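For example, when probing the user store directly (outside the server), wiring in log4jdbc looks roughly like this; inside the server you would instead point the datasource in master-datasources.xml at the log4jdbc driver. The connection details below are placeholders, and log4jdbc 1.2 with a log4j appender on the jdbc.sqltiming logger is assumed:

// Rough sketch: running a query through log4jdbc's proxy driver so that
// execution time is written to the jdbc.sqltiming log. Connection details
// and the query are placeholders; UM_USER is the WSO2 user store table.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class JdbcTimingProbe {
    public static void main(String[] args) throws Exception {
        Class.forName("net.sf.log4jdbc.DriverSpy");                     // log4jdbc wraps the real MySQL driver
        try (Connection con = DriverManager.getConnection(
                     "jdbc:log4jdbc:mysql://localhost:3306/wso2is_db",  // note the log4jdbc: prefix
                     "wso2user", "wso2pass");
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery("SELECT COUNT(*) FROM UM_USER")) {
            rs.next();
            // Per-statement timings are emitted by the jdbc.sqltiming logger.
            System.out.println("users: " + rs.getInt(1));
        }
    }
}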

Related

OIDC Disconnect in NiFi

I have set up OIDC on my NiFi standalone instance. It works great, but if I idle for more than 5 minutes it redirects me to an Unauthorized window that says
"Unknown user with identity anonymous".
Refreshing solves this; however, is there a way to extend the session, or a workaround to avoid these disconnects?
In case anyone is stuck with a similar problem,
my solution is to overwrite the NAR file responsible for OIDC authentication (just increasing the expiration timer hehe).
It's not a pleasant solution, but it will do for the time being, until NiFi supports refresh tokens or adds a feature to customize session duration.
Stay updated at
https://issues.apache.org/jira/browse/NIFI-4890
It depends on the Identity Provider (IdP) used - it generates tokens with a preconfigured validity period. Usually this can be configured at the client configuration level, but it is recommended to keep the validity short. OIDC offers options to renew the access token, but it depends on the flow used: it can be refreshed via refresh tokens (authorization code flow) or via silent refresh (implicit flow). It is not clear which IdP and flow are used in your case, so only these general recommendations can be given.
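For reference, a refresh-token renewal is just a POST to the IdP's token endpoint. A minimal sketch for the authorization code flow; the endpoint, client credentials, and token values are placeholders that depend entirely on the IdP in use:

// Minimal sketch of an OAuth2 refresh-token call (authorization code flow).
// The endpoint, client credentials, and refresh token are placeholders.
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Scanner;

public class RefreshTokenExample {
    public static void main(String[] args) throws Exception {
        String body = "grant_type=refresh_token"
                + "&refresh_token=STORED_REFRESH_TOKEN"
                + "&client_id=my-client-id"
                + "&client_secret=my-client-secret";

        HttpURLConnection con = (HttpURLConnection)
                new URL("https://idp.example.com/oauth2/token").openConnection();
        con.setRequestMethod("POST");
        con.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
        con.setDoOutput(true);
        try (OutputStream os = con.getOutputStream()) {
            os.write(body.getBytes(StandardCharsets.UTF_8));
        }
        // The JSON response contains a fresh access_token (and possibly a new refresh_token).
        try (Scanner s = new Scanner(con.getInputStream(), StandardCharsets.UTF_8.name())) {
            System.out.println(s.useDelimiter("\\A").next());
        }
    }
}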

Google's RuntimeConfig API responds with 'Our systems have detected unusual traffic from your computer network'

Since today (November 20, 2018) we get error responses from Google's RuntimeConfig API:
Our systems have detected unusual traffic from your computer network. This page checks to see if it's really you sending the requests, and not a robot...
(check this link for the complete HTML error)
We retrieve variables from Google's RuntimeConfig using the API in our code. We make quite a few requests, but not more than before:
A developer starts his server locally, which retrieves all the needed variables (±30 every time you start).
Requesting RuntimeConfig variables via gcloud results in the same HTML error:
gcloud beta runtime-config configs variables get-value databaseHost --config-name database --project=your-test-environment
Other gcloud API requests work (projects describe, gsutil, etc.).
How can I verify whether I violated any terms? I can only find a usage limit of 6000 calls per minute in the Cloud Console.
You can find the quotas for Runtime Configurator, and how much of them you are using, in the Cloud Console under IAM & Admin. In the Quotas section you can filter on Service = Cloud Runtime Configuration API and you should see all the quotas and how close to them you are for this API. There are 4 quotas that may affect you (docs here):
1200 Queries Per Minute (QPM) for delete, create, and update requests
600 QPM for watch requests
6000 QPM for get and list requests.
4MB of data per project, which consists of all data written to the Runtime Configurator service and accompanying metadata.
We had the exact same issue on November 20th, when a large number of our preemptible instances were reallocated at the same time.
Our startup scripts make use of the gcloud beta runtime-config ... commands, and they all responded with 503.
These commands responded correctly again after a few hours.
We had a support ticket with Google; there was a problem with their internal quota mechanisms at the time, which has since been fixed, so the issue is resolved.
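If transient 503s like this hit your startup scripts again, wrapping the call in a retry with exponential backoff can bridge a short outage. A minimal sketch; the gcloud invocation and the retry parameters are only illustrative:

// Minimal sketch: retry an operation with exponential backoff when it fails
// transiently (e.g. a 503 from the Runtime Configurator API).
import java.util.concurrent.Callable;

public class BackoffRetry {
    static <T> T withBackoff(Callable<T> op, int maxAttempts) throws Exception {
        long delayMs = 1000;
        for (int attempt = 1; ; attempt++) {
            try {
                return op.call();
            } catch (Exception e) {
                if (attempt >= maxAttempts) throw e;   // give up after maxAttempts
                Thread.sleep(delayMs);
                delayMs *= 2;                          // 1s, 2s, 4s, 8s, ...
            }
        }
    }

    public static void main(String[] args) throws Exception {
        int exit = withBackoff(() -> {
            Process p = new ProcessBuilder("gcloud", "beta", "runtime-config",
                    "configs", "variables", "get-value", "databaseHost",
                    "--config-name", "database").inheritIO().start();
            int code = p.waitFor();
            if (code != 0) throw new IllegalStateException("gcloud exited with " + code);
            return code;
        }, 5);
        System.out.println("succeeded with exit code " + exit);
    }
}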

Gmail-API request quota at a user level

Note: This question is about something that I do not understand in the documentation here:
https://developers.google.com/gmail/api/v1/reference/quota#concurrent_requests
Concurrent Requests
The Gmail API enforces a per-user concurrent request limit (in
addition to the per-user rate limit). This limit is shared by all
Gmail API clients accessing a given user and ensures that no API
client is overloading a Gmail user mailbox or their backend server.
enforces a per-user concurrent request limit (in addition to the per-user rate limit).
I cannot find what the 'per-user concurrent request' limit is anywhere in their documentation, whereas the per-user rate limit is described at the top of the same page:
https://developers.google.com/gmail/api/v1/reference/quota#top_of_page
The confusion here is the difference between the per-user concurrent request limit and the per-user rate limit.
Let's say I make an app that lets users read from their Gmail account. I am going to be limited by the number of requests each user can make through MY app; that limit is the per-user rate limit.
Now let's say the user installs your app, which also allows them to access their Gmail account. You are likewise limited in how fast the user can access the API via the per-user rate limit.
However, both of our apps, and the Gmail and Inbox apps, are all running under the same per-user concurrent request limit; concurrent means across all of the apps the user is using.
The per-user concurrent request limit is probably there to ensure that a developer doesn't create a number of different projects and rip data using all of them.
To my knowledge, the per-user concurrent request limit is not documented; it's a stealth limit, and I have never seen anyone who has been able to nail down exactly what the numbers are for the concurrent limits in Google APIs, with the exception of the Google Analytics API, which is 10,000, not including the Google Analytics website and the official mobile apps.
Example:
per-user rate limit
User number one logs in and lists all of his emails; he has 10 and gets no error.
User number two logs in; he has 1000 emails, and your application tries to select them all in 1 second. You are going to get a rate limit error for this user. You are flooding Google.
However, apps by other developers will still be able to access the user's inbox via the API.
per-user concurrent
User number one is only running your application. He logs in and lists all of his emails; he has 10 and gets no error.
User number two is running your application and 20 other applications by other developers. He tries to list all of his emails; assuming he has done this in all the applications at the same time, he may end up getting an error.
These errors are user based, as the 'per-user' in the name of the limit suggests.
Example 2:
Let's look at the Google Analytics API because I know the hard numbers for this API.
A user using your app can make at most 100 requests over 90 seconds (user-app based).
An application can make at most 50,000 requests a day (app based).
All applications together can make at most 10,000 requests a day against a view (concurrent, app based).
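To make the distinction concrete in code: the per-user rate limit governs how fast one app sends requests for a user, while the concurrent limit caps how many requests for that user are in flight across all apps at once. A minimal sketch of how a single client might respect both on its side, using a semaphore for in-flight requests and a minimum gap between requests; the limit values are illustrative, not Google's actual numbers:

// Minimal sketch: throttling one client's Gmail API calls for a single user.
// MAX_IN_FLIGHT and MIN_GAP_MS are illustrative values, not documented limits.
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;

public class PerUserThrottle {
    private static final int MAX_IN_FLIGHT = 2;      // cap on concurrent requests from this app
    private static final long MIN_GAP_MS = 400;      // spacing between requests (rate limiting)

    private final Semaphore inFlight = new Semaphore(MAX_IN_FLIGHT);
    private long lastRequestAt = 0;

    public void callGmail(Runnable request) throws InterruptedException {
        inFlight.acquire();                           // concurrent limit: block if too many in flight
        try {
            synchronized (this) {                     // rate limit: enforce a minimum gap
                long wait = lastRequestAt + MIN_GAP_MS - System.currentTimeMillis();
                if (wait > 0) Thread.sleep(wait);
                lastRequestAt = System.currentTimeMillis();
            }
            request.run();                            // the actual Gmail API call would go here
        } finally {
            inFlight.release();
        }
    }

    public static void main(String[] args) {
        PerUserThrottle throttle = new PerUserThrottle();
        ExecutorService pool = Executors.newFixedThreadPool(10);
        for (int i = 0; i < 10; i++) {
            final int n = i;
            pool.submit(() -> {
                try {
                    throttle.callGmail(() -> System.out.println("listed messages, request " + n));
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }
        pool.shutdown();
    }
}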

IBM MobileFirst Analytics Console Total Sessions number with session independent mode configured

I'm working on a project with session independent mode configured (worklight.properties file). In my Analytics Console I can see 68 total adapter calls, but the total sessions number shows 0. Is this behavior right? I think there must be at least 1 session created.
In this link I found related information; however, 0 sessions versus 68 adapter calls sounds odd.
The behaviour you observe is expected. This is because in session independent mode, session count increases when a protected resource is invoked and an OAuth token is issued from the server. This does not appear to be happening in your case with the use of WL.Client.invokeProcedure().
If you use the WLResourceRequest API to invoke your adapter resources, the session count will increase on the first resource request, as a new token will be issued by the server (see the sketch below). More details on the API are in this link.
Session count will not increase again until the token expires, a protected resource is called, and a new token is issued from the server.
The information is available in the link you have already referenced.
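For illustration, a minimal sketch of invoking an adapter resource with WLResourceRequest from an Android client; the adapter path is a placeholder, and the MobileFirst Android SDK is assumed to be already initialized:

// Minimal sketch (Android client, MobileFirst Android SDK assumed to be initialized).
// The adapter path "/adapters/MyAdapter/getBalance" is a placeholder.
import java.net.URI;

import com.worklight.wlclient.api.WLFailResponse;
import com.worklight.wlclient.api.WLResourceRequest;
import com.worklight.wlclient.api.WLResponse;
import com.worklight.wlclient.api.WLResponseListener;

public class AdapterCall {
    public void callAdapter() {
        WLResourceRequest request =
                new WLResourceRequest(URI.create("/adapters/MyAdapter/getBalance"),
                                      WLResourceRequest.GET);
        // The first request triggers an OAuth token to be issued, which is what
        // increments the session count in the Analytics Console.
        request.send(new WLResponseListener() {
            @Override
            public void onSuccess(WLResponse response) {
                System.out.println(response.getResponseText());   // handle the adapter response
            }

            @Override
            public void onFailure(WLFailResponse response) {
                System.out.println(response.getErrorMsg());
            }
        });
    }
}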

Performance improvement for web services

We have a web service that is called to provide the delivery date of a product while purchasing on an eCommerce website.
We are using IBM Sterling Order Management in the backend, with its out-of-the-box (OOB) web service and OOB service.
This web service (WSDL) is taking a long time, more than 40 seconds, which causes a TimeoutException in other integrated systems (middleware).
So we want to improve the performance of this web service. Could you please suggest ways to improve it? Would it improve if the server's specs were upgraded? As it is an OOB service, we can't customize it.
First of all, you need to figure out the performance bottleneck. To start with, you could put a verbose trace on the OOB web service. Use the logs to see if you can zero in on any particular component or SQL consuming the majority of the time. If it's SQL, you can tune/baseline the OOB queries/tables using indexes.
If you have any user exits implemented (for the OOB API), ensure that they are lean and aren't making any expensive API calls such as the changeOrder API.
One of the questions to ask here is whether the web service needs to respond with the actual processing results, or whether it could move the actual processing to the background (e.g. a separate integration server) and just respond with a simple acknowledgement of the request. If the service only needs to respond with an acknowledgement, you could possibly move the actual processing to a separate async service (a rough sketch of that pattern follows).
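As a rough illustration of that acknowledge-then-process-asynchronously pattern (the queue and order id handling here are hypothetical, not Sterling OOB components):

// Minimal sketch of the "acknowledge now, process later" pattern.
// The order id and the worker logic are hypothetical placeholders.
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;

public class AsyncAckService {
    private final BlockingQueue<String> queue = new LinkedBlockingQueue<>();
    private final ExecutorService worker = Executors.newSingleThreadExecutor();

    public AsyncAckService() {
        // Background worker: this is where the slow (40+ second) backend
        // processing would run, off the synchronous request path.
        worker.submit(() -> {
            while (!Thread.currentThread().isInterrupted()) {
                try {
                    String orderId = queue.take();
                    System.out.println("processed delivery date for " + orderId);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        });
    }

    // The synchronous endpoint only enqueues the work and acknowledges immediately.
    public String handleRequest(String orderId) {
        queue.offer(orderId);
        return "ACCEPTED:" + orderId;
    }

    public static void main(String[] args) throws InterruptedException {
        AsyncAckService service = new AsyncAckService();
        System.out.println(service.handleRequest("ORD-1001"));  // returns immediately
        Thread.sleep(500);                                       // give the worker time in this demo
        service.worker.shutdownNow();
    }
}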
First, try to find out where the actual problem is; here are a few pointers:
1) Check in OMS how much time the service takes with the same input you are using to invoke the web service.
2) If the response time from the OMS end is fine, then check the network latency/bandwidth.
3) Check CPU usage while hitting the web service.
