How many queries per second is normal for this site?

I have a small site running Flynax classifieds software. I get 10-15 concurrent users at most. Sometimes I get a very high load average that results in outages and downtime on my server.
I run
root#host [~]# mysqladmin proc stat
and I see this:
Uptime: 111346 Threads: 2 Questions: 22577216 Slow queries: 5159 Opens: 395 Flush tables: 1 Open tables: 285 Queries per second avg: 202.766
Is 202.766 queries per second normal for a small site like mine?
The hosting company says my app is poorly coded and must be optimized.
The Flynax developers say the server is weak and must be replaced.
I'm not sure what to do; any help is much appreciated.

202.766 queries per second isn't normal for the small website you described. (Note that this average is just Questions divided by Uptime: 22,577,216 / 111,346 s ≈ 202.8.)
Most likely some queries are being run in a loop, and that is why you see such statistics.
As far as I know, the latest Flynax versions have a MySQL debug option; using it, you can see how many queries run on a page and how long each query takes.
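If your Flynax version lacks that option, MySQL's general query log gives a similar picture. A minimal sketch, assuming you have admin privileges on the MySQL server (and remembering to switch it off again, since it logs every statement):
-- Capture every statement to a table for a short window
SET GLOBAL log_output = 'TABLE';
SET GLOBAL general_log = 'ON';
-- Load one Flynax page in the browser, then inspect what ran
SELECT event_time, argument
FROM mysql.general_log
ORDER BY event_time DESC
LIMIT 100;
-- Don't leave this enabled in production
SET GLOBAL general_log = 'OFF';
If one page view produces hundreds of near-identical statements here, that is the loop the statistics suggest.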
Cheers

Related

Is it normal that CockroachDB Serverless uses 500K RUs in 19 hours with no connections?

I set up a CockroachDB cluster for a school project. The only thing I have done is create 1 database with 1 table containing 6 rows, but when I look at the dashboard I have already used 500K RUs. This seems like a huge amount to me, but I'm new to cloud databases, so I don't know whether this is normal behavior or not. I'm just worried I will run out of RUs without doing anything on the database. The attached image showed the graph of RU usage while there were no connections and the hub wasn't open. Can anyone clarify this for me?
I think this explanation is more likely to be the reason:
https://www.cockroachlabs.com/docs/cockroachcloud/serverless-faqs.html#my-cluster-doesnt-have-any-current-co[…]ing-rus-when-there-are-no-connections
To summarize: the monitoring console itself uses up some RUs. So if you have a browser tab open with the console, it will consume RUs even if you don't have any connections open.
As that FAQ says, this can use ~8 RUs per second. Over 19 hours, that is 8 × 3,600 × 19 ≈ 547,000 RUs, which lines up with the usage you saw. The solution is to not leave the console open.
On the stats point, note that automatic statistics collection is only triggered when data in the table changes.
I believe what you're seeing is the automatic metric collection. You can read more about it in the FAQ.
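If you want to rule out automatic statistics collection as a contributor, two quick checks (standard CockroachDB statements; the table name is a placeholder for yours):
-- When were statistics last collected for the table?
SHOW STATISTICS FOR TABLE your_table;
-- Disable auto-collection entirely (usually not recommended)
SET CLUSTER SETTING sql.stats.automatic_collection.enabled = false;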

Same Hasura query executed with hugely different execution times

While periodically sending the same GraphQL query to the Hasura server, I have observed significantly different execution times.
In one of these cases the query executed in under a second, whereas in another the same query took more than 150 seconds. The execution times were captured from the Hasura "http-log" statements.
An additional observation from the corresponding "query-log" statements is that the SQL is generated within similar times in both cases.
Is there any reason why the generated SQL would be executed after a significant delay in one case compared to the other? Is there any specific reason for this inconsistent behaviour, and are there any specific configurations that can overcome this issue?
I don't know if this counts as an answer; it's certainly not a general-case answer, as it reflects only our experience.
We encountered a similar problem: inconsistent latencies for the same queries.
Here is where we looked and what we found.
1. Hasura
Hasura itself is a very thin and predictable layer above PostgreSQL (and now other DBs too).
I'm not a Haskell expert, but I got the impression that the SQL generation comes from here: https://github.com/hasura/graphql-engine/blob/b2461c5899a881183ad2d269ebe8a2c6f55e46af/server/src-lib/Hasura/GraphQL/Execute/LiveQuery.hs
(I could be wrong, and I will be grateful if somebody corrects me.)
So:
Hasura always generates the same SQL for the same query;
this process is predictable;
it has a low cost.
Conclusion: Hasura itself could not be the source of the different latencies. We need to look at the DB level.
2. What we encountered on the DB level
We built a simple test: running the same query directly on the DB.
We discovered that the same query ran as 100 ms - 100 ms - 2 seconds - 150 ms - 3 seconds - 90 ms.
We searched for locks and did not find any.
We looked at buffering and discovered that almost all of the DB was cached in memory; a sketch of these checks is below.
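For reference, the checks were along these lines (standard PostgreSQL; the query and table name are placeholders for your own):
-- Run the same statement repeatedly and see where its pages come from:
-- "shared hit" means cache, "read" means real disk I/O
EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM your_table WHERE id = 1;
-- Look for ungranted locks while a slow run is in flight:
SELECT pid, locktype, relation::regclass, mode, granted
FROM pg_locks
WHERE NOT granted;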
Finally, our suspicion was that Azure Database (we used the managed cloud PostgreSQL from MS) was misbehaving.
We contacted support (we had other questions for them as well), and finally we discovered that we had simply hit the IOPS limit.
This hypothesis was supported by a simple fact: if we ran VACUUM / REINDEX / REFRESH MATERIALIZED VIEW / heavy procedures, the DB became much less responsive for a period of time.
We considered upgrading Azure Database, but we had other problems and wanted to upgrade the PostgreSQL version anyway, so we finally decided to migrate to Amazon RDS.
(That's not bashing Azure or promoting Amazon; personally, I think running on-premises would be best.)
After that, all the strange execution times disappeared.
Consider for yourself how this reflects your case.
In general, I recommend looking at the DB level only.

Oracle 12c: SQL query hangs forever only occasionally

I have a SQL query that fetches roughly 200 columns from multiple tables and normally runs in a matter of minutes.
A Java program kicked off by cron calls the SQL every 4 hours, but occasionally it hangs forever (it stops fetching data; neither updates nor inserts are involved).
Here are some outputs from V$SESSION.
STATUS: ACTIVE
ROW_WAIT_OBJ#: 22392 ← not changing
ROW_WAIT_FILE#: 6 ← not changing
ROW_WAIT_BLOCK#: 8896642 ← not changing
ROW_WAIT_ROW#: 0 ← not changing
LAST_CALL_ET: 5632 ← keeps increasing
★ No other heavy SQL queries are running at the same time
What could be the cause of this and what should I look into to solve it?
You can use TKPROF or SQL Profiler; these reports can help you. We can't reproduce your problem from here.
If you attach your tuning reports, we can help, because many things can cause performance problems, and a comprehensive study is needed to understand this one.
Follow this link:
https://docs.oracle.com/cd/E11882_01/server.112/e41573/perf_overview.htm
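As a quick first check before the full tuning reports, you can translate the V$SESSION wait columns from the question into an object name and the current wait event (standard dictionary views; 22392 and the SID come from your own output):
-- Which object is the session waiting on?
SELECT owner, object_name, object_type
FROM dba_objects
WHERE object_id = 22392;
-- What is the session actually waiting for right now?
SELECT sid, event, state, wait_time, seconds_in_wait
FROM v$session_wait
WHERE sid = :your_sid;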

Azure SQL Data IO 100% for extended periods for no apparent reason

I have an Azure website serving about 100K requests/hour; it connects to an Azure SQL S2 database with about 8 GB of throughput/day. I've spent a lot of time optimizing the database indexes, queries, etc. Normally the Data IO, CPU, and Log IO percentages are well behaved, in the 20% range.
A portion of the recent data throughput is retained for supporting our customers. I have a nightly maintenance procedure that removes obsolete data to manage the database size. This mostly works well, with the exception of removing image blobs in a varbinary(max) field.
The nightly procedure has a loop that sets the varbinary(max) field of 10 records to null at a time, waits a couple of seconds, then does the next 10. The nightly total for this loop is about 2,000 records; a sketch of the loop's shape is below.
This loop will run for about 45-60 minutes and then stop running, with no return to my remote SQL Agent job and no error reported. A second and sometimes third run of the procedure is necessary to finish setting the desired blobs to null.
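(For illustration only, the loop described above is shaped roughly like this; the table and column names are invented:)
DECLARE @done INT = 0, @batch INT;
WHILE @done < 2000
BEGIN
    -- Null out the blob field for 10 obsolete rows at a time
    UPDATE TOP (10) dbo.CustomerImages
    SET ImageBlob = NULL
    WHERE ImageBlob IS NOT NULL AND IsObsolete = 1;
    SET @batch = @@ROWCOUNT;   -- rows actually touched this pass
    IF @batch = 0 BREAK;       -- nothing left to clear
    SET @done = @done + @batch;
    WAITFOR DELAY '00:00:02';  -- pause a couple of seconds between batches
END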
In an attempt to relieve the load on the nightly procedure, I started running a job once every 30 seconds throughout the day; it sets one blob to null each time.
Normally this trickle job is fine and runs in 1-6 seconds. However, once or twice a day something goes wrong, and I can find no explanation for it. The Data IO percentage peaks at 100% and stays there for 30-60 minutes or longer. This causes the database responsiveness to suffer, and the website performance goes with it. The trickle job also reports running for this extended period. If I stop the SQL Agent job, it can take a few minutes to stop, but the Data IO continues at 100% for the 30-60 minute period.
The web service requests and database demands are relatively steady throughout the business day; there are no volatile demands that would explain this. No database deadlocks or other errors are reported. It's as if the database hits some kind of backlog limit where its ability to keep up suddenly drops, and it can't catch up until whatever is jammed finally clears. Then the performance suddenly returns to normal.
Do you have any ideas what might be causing this intermittent and unpredictable issue? Any ideas what I could look at when one of these events is happening to determine why the Data I/O is 100% for an extended period of time? Thank you.
If you are on SQL DB V12, you may also consider using the Query Store feature to root-cause this performance problem. It's now in public preview.
To turn on Query Store, just run the following statement:
ALTER DATABASE your_db SET QUERY_STORE = ON;
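Once Query Store has collected some data, here is a sketch of how you might surface the statements doing the most physical I/O during one of these episodes (these are the standard Query Store catalog views):
SELECT TOP 10
       qt.query_sql_text,
       SUM(rs.count_executions * rs.avg_physical_io_reads) AS total_physical_reads
FROM sys.query_store_runtime_stats AS rs
JOIN sys.query_store_plan AS p ON p.plan_id = rs.plan_id
JOIN sys.query_store_query AS q ON q.query_id = p.query_id
JOIN sys.query_store_query_text AS qt ON qt.query_text_id = q.query_text_id
GROUP BY qt.query_sql_text
ORDER BY total_physical_reads DESC;
Running this during, or just after, one of the 100% periods should point at whatever is generating the backlog.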

Cognos report performance and cache

I am working on Cognos 8. One of my reports takes roughly 1 minute to run, but sometimes only 20 seconds, as it loads from cache. For a few needs I want to prove that the report ran from cache the second time; how can I prove that? Is the performance logged somewhere?
Cognos 8 uses the old 32-bit CQM engine.
The cache of this engine is very primitive:
the cache only works within the same session;
it only works if the query is identical;
by default it caches the last 5 queries.
Based on the limitations above, you can do the following:
run the report in a different session (a different browser or a different user);
change any prompt value to a different value.
This will ensure the report is not running from cache.
If you want to trace the performance of the queries, then using the DB to capture them is the most efficient way. The alternative is activating the Cognos IPF trace:
Cognos 8 report performance issues