I need some simple help. My hosting provider recently added a limit on queries per hour,
and my problem is that I have a couple of Magento installations with over 10k products.
For the import I use Magmi, and I saw that it shows stats during the import. What I want to know is: which of these numbers, if any, is the actual number of queries executed?
Global Stats
Imported: 3204 items (100%) | Elapsed: 29.4436 | Recs/min: 6530 | Attrs/min: 398330 | Last 0.5%: 0.0721
DB Stats
Requests: 70414 | Elapsed: 17.0054 | Speed: 248441 reqs/min | Avg Reqs: 21.98/item | Efficiency: 57.76% | Last 0.5%: 198 reqs
Thank you in advance.
Fabio
One thing to keep in mind: implement all the caching you possibly can. HTML block caching, APC, and full page caching (a third-party module is required if you're not on Enterprise) all cache data retrieved from the database. If you're pulling it from cache, you don't need to hit the database until the data needs to be refreshed. This makes the site more responsive and is a win all round.
At the command line in SSH, you can issue the command:
mysqladmin status -u dbuser -pdbpass
dbuser and dbpass being your MySQL user and password. It will kick back a line like:
Uptime: 1878 Threads: 1 Questions: 8341 Slow queries: 2 Opens: 8525 Flush tables: 1 Open tables: 512 Queries per second avg: 4.441
This gives you your server uptime and the average queries per second. This server should have processed approximately 8,340 queries in the time it was up (uptime × queries per second: 1878 × 4.441 ≈ 8340, which lines up with the Questions counter of 8341).
Another way to see what's going on is to use mysql itself:
mysql -u dbuser -pdbpass dbname -Bse "show status like 'uptime';"
mysql -u dbuser -pdbpass dbname -Bse "show status like 'queries';"
You could then set up a cron job that logs the Queries status value every hour; the queries per hour is then the current total minus the previous total.
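A minimal sketch of such a logging script (the credentials, log path and script location are placeholders; SHOW GLOBAL STATUS is used so you get the server-wide counter):

#!/bin/sh
# Append the cumulative 'Queries' counter and the hourly delta to a log file.
LOG=/var/log/query-count.log

# Current cumulative query count since the MySQL server started
CURRENT=$(mysql -u dbuser -pdbpass -Bse "show global status like 'Queries';" | awk '{print $2}')

# Last value we logged (fall back to the current one if the log is still empty)
PREVIOUS=$(tail -n 1 "$LOG" 2>/dev/null | awk '{print $2}')
PREVIOUS=${PREVIOUS:-$CURRENT}

echo "$(date '+%Y-%m-%dT%H:%M') $CURRENT delta: $((CURRENT - PREVIOUS))" >> "$LOG"

Run it hourly with a crontab entry such as 0 * * * * /path/to/log_queries.sh, and the delta column gives you queries per hour.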
We run Mondrian (version "3.7.preview.506") on a Tomcat web server.
We have some long-running MDX queries.
For example: the first calculation takes 276,764 ms and sends 84 SQL requests to the database (30 to 700 ms for each SQL statement).
We see that the SQL statements are not executed in parallel; only two "mondrian.rolap.agg.SegmentCacheManager$sqlExecutor" threads are running at the same time.
Is there a way to force Mondrian/olap4j to execute the SQL statements with more parallelism?
What about the property "mondrian.rolap.maxSqlThreads", which is set to 100 by default?
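For reference, a minimal sketch of how that property is usually set; Mondrian reads it from a mondrian.properties file on the classpath or from a JVM system property (-Dmondrian.rolap.maxSqlThreads=...), and whether changing it affects the parallelism observed here is exactly the open question:

# mondrian.properties - the value shown is the default mentioned above
mondrian.rolap.maxSqlThreads=100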
Afterwards we execute the same MDX query a second time, and the calculation finishes in 4,904 ms.
Conclusion: if the "internal cache" (mondrian.rolap.agg.SegmentCacheManager) has already loaded the segments, the calculation is executed without any database requests, but ...
How can we "warm up" the internal cache?
One way we tried was to rewrite the MDX queries so that we load several months into the cache at once (MDX-B):
MDX-A: SELECT ... ON ROWS FROM cube01 WHERE {[Time].[Default].[2017].[4]}
becomes
MDX-B: SELECT ... ON COLUMNS, CrossJoin( ... , {[Time].[Default].[2017].[2]:[Time].[Default].[2017].[4]}) ON ROWS FROM cube01
The rewritten MDX query takes 1,235,128 ms (244 SQL requests); afterwards we execute our original MDX query (MDX-A) and the calculation takes 6,987 ms.
The interesting part for us was that the calculation still takes longer than 5 seconds (compared with the second execution of the same query),
even though we did not have any SQL requests anymore.
The warm-up of the cache does not work as expected (in our opinion): MDX-B takes much longer to collect the data in one statement than running the monthly execution in three steps (February to April), and the calculation in memory also takes more time. Why? How does loading the segments really work?
What is the best practice for loading the segments to speed up the calculation in memory?
Is there a way to feed the "Mondrian cube" with simple SQL statements?
Thanks in advance.
Fact table with 3,026,236 rows - growing daily.
6 dimension tables.
Date dimension table with 21,183 rows.
We have monitored our test classes with the JVM's VisualVM.
Mondrian 3.7.preview.506 - olap4j-1.1.0
Database: Oracle Database 11g Release 11.2.0.4.0 - 64bit
(We also tried a MemSQL database; it was only 50% faster ...)
We have a large user base in our Oracle Identity Manager system, with over 0.5 million records in the USR table. Our trusted reconciliation scheduled jobs run every 2 hours. While running the trusted reconciliation jobs for LDAP and flat file, OIM fires a search query on the USR table every time to list all active users. Due to the large user base, this query takes a lot of time, and our scheduled job, which is supposed to bring in fewer than 100 inserts/updates, takes around 1 hour to complete. Is there a way to optimize it? I have gone through the OIM optimization guide and have done all the optimizations suggested by Oracle, which include putting the USR table in the default buffer pool. Any suggestions would be appreciated.
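As a rough diagnostic sketch (the filter on USR_STATUS and the schema name are assumptions; substitute the actual statement OIM issues), you could look at the execution plan of that search and make sure the optimizer statistics on USR are current:

-- Hypothetical stand-in for the slow "list all active users" query
EXPLAIN PLAN FOR
  SELECT usr_key, usr_login FROM usr WHERE usr_status = 'Active';

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

-- Refresh statistics so the plan reflects the current 0.5 million rows (schema name is a placeholder)
EXEC DBMS_STATS.GATHER_TABLE_STATS(ownname => 'OIM_SCHEMA', tabname => 'USR');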
Thanks.
I am connecting to a remote Oracle DB using MS Access 2010 and the ODBC for Oracle driver.
In MS Access it takes about 10 seconds to execute:
SELECT * FROM SFMFG_SACIQ_ISC_DRAWING_REVS
But takes over 20 minutes to execute:
SELECT * INTO saciq_isc_drawing_revs FROM SFMFG_SACIQ_ISC_DRAWING_REVS
Why does it take so long to build a local table with the same data?
Is this normal?
The first query is only reading the data, and you might not be getting the full result set back in one go. The second is both reading and writing the data, which will always take longer.
You haven't said how many records you're retrieving and inserting. If it's tens of thousands then 20 minutes (or 1200 seconds approx.) seems quite good. If it's hundreds then you may have a problem.
Have a look here https://stackoverflow.com/search?q=insert+speed+ms+access for some hints as to how to improve the response and perhaps change some of the variables - e.g. using SQL Server Express instead of MS Access.
You could also do a quick speed comparison test by trying to insert the records from a CSV file and/or Excel cut and paste.
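Another quick check (the target table name here is just an example) is to time a small slice of the same copy, since Access SQL supports TOP in a make-table query:

SELECT TOP 1000 * INTO saciq_test FROM SFMFG_SACIQ_ISC_DRAWING_REVS

If even 1,000 rows takes minutes, the bottleneck is the ODBC insert path rather than the overall volume of data.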
I have a small site running Flynax classifieds software. I get 10-15 concurrent users at the most. Sometimes I get a very high load average that results in outages and downtime on my server.
I run
root@host [~]# mysqladmin proc stat
and I see this:
Uptime: 111346 Threads: 2 Questions: 22577216 Slow queries: 5159 Opens: 395 Flush tables: 1 Open tables: 285 Queries per second avg: 202.766
Is 202.766 queries per second normal for a small site like mine?!
The hosting company says my app is poorly coded and must be optimized.
The Flynax developers say the server is weak and must be replaced.
I'm not sure what to do; any help is much appreciated.
202.766 queries per second isn't normal for the small website you described.
Probably some queries run in a loop, and that is why you have such statistics.
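One way to check that (a sketch; the log path is a placeholder, and the general log grows fast, so switch it off again afterwards) is to let MySQL log what the application actually sends:

-- Log every statement for a few minutes to spot repeated/looping queries
SET GLOBAL general_log_file = '/tmp/all-queries.log';
SET GLOBAL general_log = 'ON';

-- Or only log statements that take longer than 1 second
SET GLOBAL slow_query_log = 'ON';
SET GLOBAL long_query_time = 1;

Seeing the same statement repeated hundreds of times per page load in that log tells you quickly whether the application or the server is at fault.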
As far as I know, the latest Flynax versions have a MySQL debug option; using it
you can see how many queries run on each page and how long each query takes.
Cheers
I've been using Heroku for one of my applications and it got shut down today because the row count exceeded 10,000 rows.
I don't understand how this figure is arrived at, though, as Rails tells me I only have around 2,000 records in the db.
Running a pg:info, I see the following:
Plan: Dev
Status: available
Connections: 1
PG Version: 9.1.5
Created: 2012-09-25 03:12 UTC
Data Size: 11.0 MB
Tables: 9
Rows: 15686/10000 (Write access revoked)
Fork/Follow: Unavailable
Can anyone explain to me how I seem to have over 15,000 rows despite only having around 2,000 records in the database?
Thanks!
Rails alone is not enough. Heroku has a nice SQL console that you can access with:
heroku pg:psql YOUR_DB_URL
Then you can run this query to get a ranking of row counts per table:
SELECT schemaname,relname,n_live_tup
FROM pg_stat_user_tables
ORDER BY n_live_tup DESC;
If you only need the updated total number of rows, you can use:
SELECT sum(n_live_tup) FROM pg_stat_user_tables;
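Note that n_live_tup comes from PostgreSQL's statistics collector, so it is an estimate rather than an exact count; for a specific table (the name below is a placeholder) you can compare it against a direct count:

SELECT count(*) FROM your_table;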
Please note that you can have both the new dev-plan db and the old SHARED one in your config (check with heroku pg:info). You have to use the correct db URL, probably the one with a color in its name.
Allow about a 30-minute delay between any SQL TRUNCATE and the Rows count updating.
By the way, the web console on http://heroku.com in my case was updated with the correct number during my SQL queries. Maybe the Heroku Toolbelt console updates are slower.
I contacted Heroku Support on this and they ran the following command to get my numbers...
$ heroku pg:info
30 minutes wasn't enough for me, so I took a backup and restored the database. Then my app came back online.