I would like to understand why CakePHP issues SHOW FULL COLUMNS queries when the debug level is set to 0. This is causing a huge performance impact on my database. I am running CakePHP 2.4.2 in my environment.
I'd appreciate any advice.
Thanks,
Andre
I am having a problem with Oracle REST Data Services (ORDS) and I can't find a solution.
The problem is as follows:
We are using ORDS on a Tomcat web server, and I have two endpoints defined: one to update a dataset and one to get all datasets from the table.
If I update a value via my endpoint, the change is written to the table, but if I then try to get the table, ORDS responds only with the old, unchanged data. After a certain period of time, while constantly re-requesting, it responds with the expected values (after at most one minute, sometimes earlier).
Because of this behaviour I suspected some type of caching, but I cannot find any such configuration in the Oracle database or in Tomcat.
Another point in favour of this theory: I logged what happens in my GET procedure and found that only the one request returning the correct values gets logged, as if the others never even happened.
The requests giving me the old values come back in the 4-8 ms range, while the request with the correct data takes 100-200 ms.
Thanks for your help :)
I tried logging what happens, but found that only the request with the fresh values was logged.
I restarted the Tomcat web server to make sure any cache was cleared, but this did not fix the problem.
I searched for a configuration in ORDS or Oracle where a cache would be defined, but none was set.
I tried setting the value via a plain SQL UPDATE instead of the endpoint, but even then the change shows up only after a delay.
Do you have a full overview of the communication path? Maybe there is a proxy in between?
If Tomcat has no caching configuration and you restarted the web server during your tests and still see the same issue, then something else is probably involved...
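One quick way to look for a cache or a proxy in the path is to compare the response headers of two consecutive GET calls; a minimal sketch with curl, where the host and endpoint path are only placeholders:

# Fetch the endpoint twice and print only the headers (the URL is a placeholder).
# A cache or proxy in between usually shows up as Age, Via, X-Cache or unexpected
# Cache-Control/ETag headers, or as an identical Date value on both responses.
curl -s -D - -o /dev/null http://tomcat-host:8080/ords/myschema/mymodule/mytable/
sleep 2
curl -s -D - -o /dev/null http://tomcat-host:8080/ords/myschema/mymodule/mytable/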
Kind regards
M-Achilles
I have a client who uses the built-in feedback system to get his notes, but suddenly when I try to go to the feedback I get the error shown in the image below.
I tried deleting the cookies in the system, but nothing changed. Are there any solutions?
Regards
We are looking at upgrading our CouchDB on our RHEL servers from 1.6.1 to 2.1.1. Before we do that, though, we wanted to run a performance test. So we created a JMeter test that goes directly against the database. It does not use any random values, so that the test would be exactly the same and we could compare the two results. This is just a standalone server; we are not using clustering. I ran the tests the exact same way for both: I ran the tests for 1.6.1, then installed 2.1.1 on the same machine, and I created the database fresh for each test run. [I also updated Erlang to R19.3.]
The results were very shocking:
Average response times:
1.6.1: 271.15 ms
2.1.1: 494.32 ms
POSTs and PUTs were really bad ...
POST:
1.6.1: 38.25 ms
2.1.1: 250.18 ms
PUT:
1.6.1: 37.33 ms
2.1.1: 358.76 ms
We are just using the default values for all the config options, except that we changed 1.6.1 to have delayed_commits = false (that is now the default in 2.1.1). I'm wondering if there's some default that changed that would make 2.1.1 so bad.
When I ran the CouchDB setup from the Fauxton UI, it added the following to my local.ini:
[cluster]
n = 1
Is that causing CouchDB to try to use clustering, or is that the same as if there were no entries here at all? One other thing, I deleted the _global_changes database, since it seemed as if it would add extra processing that we didn't need.
Is that causing CouchDB to try to use clustering, or is that the same as if there were no entries here at all?
It's not obvious from your description. If you set up CouchDB 2.0 as clustered, then that's how it will work. Which way you set it up should be clear from the setup instructions you followed: http://docs.couchdb.org/en/2.1.1/install/setup.html
You can tell by locating the files on disk and seeing if they are in a shards directory or not.
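For example, you could check the data directory layout and the database metadata; a minimal sketch, where the data path, database name and port are assumptions that depend on your install:

# The data directory is an assumption; adjust to your install (e.g. /var/lib/couchdb or /opt/couchdb/data).
ls /var/lib/couchdb          # 1.x-style single-node layout: dbname.couch files sit directly here
ls /var/lib/couchdb/shards   # 2.x clustered layout: one subdirectory per shard range
# The database metadata in 2.x also reports the clustering parameters (q, n, w, r); "mydb" is a placeholder.
curl http://localhost:5984/mydb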
I'm pretty sure you want at least two, so setting n = 1 doesn't seem like something you should be doing.
If you're trying to run a single node, follow the instructions I linked above to do that.
One other thing, I deleted the _global_changes database, since it seemed as if it would add extra processing that we didn't need.
You probably don't want to delete random parts of your database unless there are instructions saying this is OK?
Some of the bulk changes we need cover over 100K issues.
Is there a way to bulk change more than 500 issues in SonarQube 6.x?
The UI certainly does not allow customizing this. Where can I find the parameter/code/table in the database needed to change the 500 limit?
It's not possible to do a single bulk change of more than 500 issues, but you can update more than 500 using the web services:
Get list of issues using api/issues/search?p=1&ps=500
Do the bulk change using api/issues/bulk_change?issues=ISSUE1,ISSUE2,... on found issues.
Repeat these two steps, updating the p (page) parameter of api/issues/search each time; a scripted version is sketched below.
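A minimal sketch of that loop using curl and jq, where the server URL, token, project key and the falsepositive transition are only example values (substitute whatever bulk_change action you need):

#!/bin/bash
# Page through api/issues/search and apply api/issues/bulk_change to each page of 500 issues.
SONAR_URL="https://sonarqube.example.com"   # placeholder
SONAR_TOKEN="xxxxxxxx"                      # placeholder user token
PROJECT="my:project"                        # placeholder project key
page=1
while true; do
  keys=$(curl -s -u "$SONAR_TOKEN:" \
    "$SONAR_URL/api/issues/search?componentKeys=$PROJECT&p=$page&ps=500" \
    | jq -r '.issues[].key' | paste -sd, -)
  [ -z "$keys" ] && break                   # stop when a page comes back empty
  curl -s -u "$SONAR_TOKEN:" -X POST \
    "$SONAR_URL/api/issues/bulk_change" \
    --data-urlencode "issues=$keys" \
    --data-urlencode "do_transition=falsepositive"   # example action
  page=$((page + 1))
done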
I have a problem that affects the response time of queries from the CassandraInput step. I use DataStax Enterprise 3.2.4 - Cassandra 1.2.13.2.
If I run the same query (any query) directly from the Cassandra client, the response is considerably faster than the same query executed by the CassandraInput step in Pentaho Data Integration.
What can cause this?
And above all, is there a way to improve the response time of the CassandraInput step in Pentaho?
I hope that some of you might have some suggestions.
Thank you
Federica
Generally this should not happen.
Try the change below and check whether performance improves.
Open the spoon.bat or spoon.sh file, according to the OS you are using, and change the setting below.
The value has to be adjusted according to the amount of RAM on your machine.
PENTAHO_DI_JAVA_OPTIONS="-Xmx2g"
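If you prefer not to edit the script itself, exporting the variable before launching Spoon is usually equivalent, since spoon.sh typically only applies its default when the variable is not already set; a minimal sketch for Linux, where the 2g heap size is just an example value:

# Give the PDI JVM a 2 GB heap (example value; size it to your machine's RAM),
# then start Spoon. On Windows, set the same variable in spoon.bat instead:
#   set PENTAHO_DI_JAVA_OPTIONS=-Xmx2g
export PENTAHO_DI_JAVA_OPTIONS="-Xmx2g"
./spoon.sh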