I've been using Heroku for one of my applications, and it got shut down today because the row count exceeded 10,000 rows.
I don't understand how this figure was arrived at, though, as Rails tells me I only have around 2,000 records in the db.
Running heroku pg:info, I see the following:
Plan: Dev
Status: available
Connections: 1
PG Version: 9.1.5
Created: 2012-09-25 03:12 UTC
Data Size: 11.0 MB
Tables: 9
Rows: 15686/10000 (Write access revoked)
Fork/Follow: Unavailable
Can anyone explain to me how I seem to have over 15,000 rows despite only having around 2,000 records in the database?
Thanks!
Rails alone won't give you the full picture here. Heroku has a nice SQL console that you can access with:
heroku pg:psql YOUR_DB_URL
Then you can run this query to get a ranking of rows per table:
SELECT schemaname,relname,n_live_tup
FROM pg_stat_user_tables
ORDER BY n_live_tup DESC;
If you just need the updated total number of rows, you can use:
SELECT sum(n_live_tup) FROM pg_stat_user_tables;
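Note that n_live_tup is an estimate maintained by the statistics collector, so it can lag behind reality. A minimal sketch to cross-check it, assuming you refresh the statistics first and substitute one of your own tables for the placeholder name:
ANALYZE;
-- exact count for one table, to compare against its n_live_tup estimate
SELECT count(*) FROM your_largest_table;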
Please note that you may have both the new Dev plan db and the old SHARED one in your config (check with heroku pg:info). You have to use the correct db URL, probably the one named after a color.
Allow up to a 30-minute delay between any SQL TRUNCATE and the Rows count updating.
BTW, in my case the web console on http://heroku.com was updated with the correct number while I was running my SQL queries. Maybe the Heroku Toolbelt console updates more slowly.
I contacted Heroku Support on this and they ran the following command to get my numbers...
$ heroku pg:info
30 minutes wasn't enough for me, so I took a backup and restored the database. Then my app came back online.
I set up a CockroachDB cluster for a school project. The only thing I have done is create one database with one table that has 6 rows in it, but when I look at the dashboard I have already used 500K RUs. That seems like a huge amount to me, but I'm new to cloud databases, so I don't know whether this is normal behavior or not. I'm just worried I will run out of RUs without doing anything on the database. In this image you can see the graph of RU usage during a period when there were no connections and the hub wasn't open. Can anyone clarify this for me?
I think this explanation is more likely to be the reason:
https://www.cockroachlabs.com/docs/cockroachcloud/serverless-faqs.html#my-cluster-doesnt-have-any-current-co[…]ing-rus-when-there-are-no-connections
To summarize, the monitoring console uses up some RUs. So if you have a browser tab open with the console, it will use RUs even if you don't have any connections open.
As that FAQ says, the console can use ~8 RUs per second. Over 19 hours, that works out to roughly 8 × 3,600 × 19 ≈ 547,000 RUs in total. The solution is to not leave the console open.
On the stats point, note that auto-stats collection is only triggered when data in the table changes.
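If you want to see which statistics have actually been collected for a table (and when), CockroachDB can show them via SQL; the table name below is only an example:
SHOW STATISTICS FOR TABLE mytable;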
I believe what you're seeing is the Automatic Metric collection. You can read more about it on this FAQ.
SonarQube is showing incorrect totals for projects and issues.
Total projects says 4, but there are only 2. It claims there are 78 bugs, yet there are none, and none get displayed in the results section (see below).
I've checked the database, and grouping the [projects] table by [project_uuid] only returns 2 rows.
SonarQube v6.2 is being used, with a SQL Server database, if that makes any difference. Could this be a setup issue? I only set up this instance a few days ago, but I am not sure where else to check other than the database, where at least the projects table looks right.
When your issue counts are scrambled like this, it means the ElasticSearch index is corrupted.
shut the server down
delete $SONARQUBE_HOME/data/es
start the server back up
Startup will take a little longer because there's an added delay while the index is rebuilt. The duration of this delay is dependent on the size of your instance.
Once your server comes back up, your numbers should be right.
I am trying to upgrade from 4.0 to 4.5.1, but the process always fails at UpdateMeasuresDebtToMinutes. I am using MySQL 5.5.27 as the database, with InnoDB as the table engine.
Basically, the problem looks like this problem.
After the writeTimeout is exceeded (600 seconds), there is an exception in the log:
Caused by: java.io.EOFException: Can not read response from server. Expected to read 81 bytes, read 15 bytes before connection was unexpectedly lost.
at com.mysql.jdbc.MysqlIO.readFully(MysqlIO.java:3166) ~[mysql-connector-java-5.1.27.jar:na]
at com.mysql.jdbc.MysqlIO.reuseAndReadPacket(MysqlIO.java:3676) ~[mysql-connector-java-5.1.27.jar:na]
Adding the indexes as proposed in the linked issue did not help.
Investigating further I noticed several things:
the migration step reads data from a table and wants to write back to the same table (project_measures)
project_measures contains more than 770000 rows
the process always hangs after 249 rows
the hanging happens in org.sonar.server.migrations.MassUpdate when calling update.addBatch(), which after BatchSession.MAX_BATCH_SIZE (250) rows forces an execute and a commit
Is there a way to configure the DB connection to allow this to proceed?
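For anyone hitting the same hang, a purely diagnostic sketch (not a fix) is to ask MySQL what it is doing while the migration is stuck, e.g. whether the batched write is waiting behind the connection that still holds the open streaming result set:
SHOW FULL PROCESSLIST;
-- for InnoDB transaction and lock details:
SHOW ENGINE INNODB STATUS;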
First of all, could you try to revert your db to 4.0 and try again?
Then, could you please give us the JDBC url (sonar.jdbc.url) you're using?
Thanks
As I need that Sonar server to run, I finally implemented a workaround.
It seems I cannot write to the database at all as long as a big result set is still open (I tried writing to a second table, but hit the same issue as before).
Therefore I changed all migrations that need to read from and write to the project_measures table (org.sonar.server.db.migrations.v43.TechnicalDebtMeasuresMigration, org.sonar.server.db.migrations.v43.RequirementMeasuresMigration, org.sonar.server.db.migrations.v44.MeasureDataMigration) to load the changed data into an in-memory structure and, after closing the read result set, write it back.
This is as hacky as it sounds and will not work for larger datasets, where you would need to do this by paging through the data or by storing everything in a secondary datastore.
Furthermore, I found that later on (in 546_inverse_rule_key_index.rb) an index needs to be created on the rules table that is larger than the max key length on MySQL (two varchar(255) columns with UTF-8 is more than 1000 bytes), so I had to limit the key length on that too (see the sketch below).
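For reference, the usual way around that limit is a prefix index, roughly like the sketch below; the index and column names are assumptions from my setup, not necessarily what the migration generates:
-- 166 characters * 3 bytes (utf8) per column keeps the combined key just under 1000 bytes
CREATE INDEX rules_repo_key ON rules (plugin_rule_key(166), plugin_name(166));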
As I said, it is a workaround, and therefore I will not accept it as an answer.
I am connecting to a remote Oracle DB using MS Access 2010 and the ODBC for Oracle driver.
In MS Access it takes about 10 seconds to execute:
SELECT * FROM SFMFG_SACIQ_ISC_DRAWING_REVS
But it takes over 20 minutes to execute:
SELECT * INTO saciq_isc_drawing_revs FROM SFMFG_SACIQ_ISC_DRAWING_REVS
Why does it take so long to build a local table with the same data?
Is this normal?
The first query is just reading the data, and you might not be getting the full result set back in one go. The second is both reading and writing the data, which will always take longer.
You haven't said how many records you're retrieving and inserting. If it's tens of thousands, then 20 minutes (approximately 1,200 seconds) seems quite good. If it's hundreds, then you may have a problem.
Have a look here https://stackoverflow.com/search?q=insert+speed+ms+access for some hints as to how to improve the response and perhaps change some of the variables - e.g. using SQL Server Express instead of MS Access.
You could also do a quick speed comparison test by trying to insert the records from a CSV file and/or Excel cut and paste.
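If you do try that, it may also be worth timing the make-table query against a plain append into a pre-created local table, since the two can behave differently in Access; the statements below are only a sketch reusing the table names from the question:
SELECT * INTO saciq_isc_drawing_revs FROM SFMFG_SACIQ_ISC_DRAWING_REVS;
INSERT INTO saciq_isc_drawing_revs SELECT * FROM SFMFG_SACIQ_ISC_DRAWING_REVS;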
I have a small site running the Flynax classifieds software. I get 10 to 15 concurrent users at the most. Sometimes I get a very high load average that results in outages and downtime on my server.
I run
root#host [~]# mysqladmin proc stat
and I see this:
Uptime: 111346 Threads: 2 Questions: 22577216 Slow queries: 5159 Opens: 395 Flush tables: 1 Open tables: 285 Queries per second avg: 202.766
Is 202.766 queries per second normal for a small site like mine?!
The hosting company says my app is poorly coded and must be optimized.
The Flynax developers say the server is underpowered and must be replaced.
I'm not sure what to do; any help is much appreciated.
202.766 queries per second isn't normal for the small website you described (the figure is simply Questions divided by Uptime: 22,577,216 / 111,346 s ≈ 202.8).
Probably some queries are being run in a loop, and that is why you see such statistics.
As far as I know, the latest Flynax versions have a MySQL debug option; using it you can see how many queries run on each page and how long each query takes.
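If that Flynax option isn't available to you, a more general alternative (just a sketch; pick your own threshold) is MySQL's slow query log, which records every statement slower than long_query_time; your status line already shows 5159 slow queries counted:
SET GLOBAL slow_query_log = 'ON';
SET GLOBAL long_query_time = 1;  -- log anything slower than 1 second
SHOW VARIABLES LIKE 'slow_query_log_file';  -- where the log is written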
Cheers