The docs say I can have 10k rows in my DB. After running pg:info I get:
=== HEROKU_POSTGRESQL_OLIVE_URL (DATABASE_URL)
Plan: Hobby-dev
Status: Available
Connections: 0/20
PG Version: 9.3.6
Created: 2015-04-16 11:06 UTC
Data Size: 16.7 MB
Tables: 16
Rows: 40302/10000 (Above limits, access disruption imminent)
Fork/Follow: Unsupported
Rollback: Unsupported
I can still add rows to my DB, so how does this limit work?
It's not a hard limit; you can go above it. However, eventually (usually within a day of exceeding the limit) you will receive an email saying that you are over the limit, and if you don't delete the excess rows, your web app may be suspended.
I have a table with around 2 billion rows that I try to query max(id) from. id is not the sorting key of the table, and the table uses the MergeTree engine.
No matter what I try, I get memory errors, and it does not stop with this one query: as soon as I try to scan any table fully to find data, my 12 GB of RAM is not enough. I know I could just add more, but that is not the point. Is it by design that ClickHouse simply throws an error when it doesn't have enough memory? Is there a setting that tells ClickHouse to spill to disk instead?
SQL Error [241]: ClickHouse exception, code: 241, host: XXXXXX, port: 8123; Code: 241, e.displayText() = DB::Exception: Memory limit (for query) exceeded: would use 9.32 GiB (attempt to allocate chunk of 9440624 bytes), maximum: 9.31 GiB (version 21.4.6.55 (official build))
Alexey Milovidov disagrees with putting minimum RAM requirements into the ClickHouse documentation, but I would say that 32 GB is a minimum for production ClickHouse.
At least (see the sketch after this list):
You need to lower the mark cache, because it is 5 GB by default! Set it to 500 MB.
You need to lower max_block_size to 16384.
You need to lower max_threads to 2.
You need to set max_bytes_before_external_group_by to 3 GB.
You need to set aggregation_memory_efficient_merge_threads to 1.
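A hedged sketch of where these settings live, using the values above (the table name and query are placeholders for your own; mark_cache_size is a server-level setting, so it needs a config edit and a restart rather than a client option):
# mark_cache_size goes into config.xml, e.g.:
#   <mark_cache_size>500000000</mark_cache_size>   <!-- ~500 MB instead of the 5 GB default -->
# The query-level settings can be passed straight to clickhouse-client:
clickhouse-client \
  --max_block_size 16384 \
  --max_threads 2 \
  --max_bytes_before_external_group_by 3000000000 \
  --aggregation_memory_efficient_merge_threads 1 \
  --query "SELECT max(id) FROM my_table"   # my_table is a placeholder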
For me, what worked was changing the maximum server memory usage ratio from 0.9 to 1.2 in config.xml:
<max_server_memory_usage_to_ram_ratio>1.2</max_server_memory_usage_to_ram_ratio>
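If you prefer not to edit the main config.xml, one way to apply this as an override (assuming a standard package install; the config.d path is my assumption, and on this 21.4 build the root tag is <yandex>, while newer releases use <clickhouse>):
cat > /etc/clickhouse-server/config.d/max_memory.xml <<'EOF'
<yandex>
    <max_server_memory_usage_to_ram_ratio>1.2</max_server_memory_usage_to_ram_ratio>
</yandex>
EOF
sudo systemctl restart clickhouse-server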
Thanks for the reply as it led me ultimately to this.
Can we change archive_lag_target DB parameter value to 1800 in RDS?
I see only allowed values (60,120,180,240,300). Is there any other way to achieve this in Amazon RDS for Oracle?
No, it appears that 60, 120, 180, 240, 300 are the only permitted values.
I tried it via the AWS Command-Line Interface (CLI):
$ aws rds modify-db-parameter-group --db-parameter-group-name oracle --parameters ParameterName=archive_lag_target,ParameterValue=1800
The response was:
An error occurred (InvalidParameterValue) when calling the ModifyDBParameterGroup operation: Value: 1800 is outside of range: 60,120,180,240,300 for parameter: archive_lag_target
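If you want to confirm the allowed values yourself, describe-db-parameters shows them (the parameter group name "oracle" is the one from the command above; the --query filter is only there to cut down the output):
aws rds describe-db-parameters \
  --db-parameter-group-name oracle \
  --query "Parameters[?ParameterName=='archive_lag_target'].[ParameterName,AllowedValues,IsModifiable]"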
From the Oracle documentation for ARCHIVE_LAG_TARGET:
ARCHIVE_LAG_TARGET limits the amount of data that can be lost and effectively increases the availability of the standby database by forcing a log switch after the specified amount of time elapses.
A 0 value disables the time-based thread advance feature; otherwise, the value represents the number of seconds. Values larger than 7200 seconds are not of much use in maintaining a reasonable lag in the standby database. The typical, or recommended value is 1800 (30 minutes). Extremely low values can result in frequent log switches, which could degrade performance; such values can also make the archiver process too busy to archive the continuously generated logs.
So, it appears that Amazon RDS enforces a maximum lag of 5 minutes.
I have the same session running in production and UAT. All it does is select the data (around 6k rows in both environments), run an Expression transformation (to hard-code a few columns), and then insert into a table (which does not have partitions).
The problem I am facing is that the PROD session takes more than 30 minutes, whereas UAT finishes within 5 minutes.
I have backtracked the timing over many days and it follows the same pattern. When I compared the session properties between the two, there was no difference at all.
When I checked the session log, it is the reading of rows that is taking the time (same row count and query in UAT as well). Could you please let me know how to proceed with this?
PROD:
Severity Timestamp Node Thread Message Code Message
INFO 4/26/2016 11:07:18 AM node02_WPPWM02A0004 WRITER_1_*_1 WRT_8167 Start loading table [FACT_] at: Tue Apr 26 01:37:17 2016
INFO 4/26/2016 11:26:48 AM node02_WPPWM02A0004 READER_1_1_1 BLKR_16019 Read [6102] rows, read [0] error rows for source table [STG_] instance name [STG]
UAT:
Severity Timestamp Node Thread Message Code Message
INFO 4/26/2016 11:40:53 AM node02_WUPWM02A0004 WRITER_1_*_1 WRT_8167 Start loading table [FACT] at: Tue Apr 26 01:10:53 2016
INFO 4/26/2016 11:43:10 AM node02_WUPWM02A0004 READER_1_1_1 BLKR_16019 Read [6209] rows, read [0] error rows for source table [STG] instance name [STG]
Follow the steps below:
1) Open the session log and search for 'Busy'.
2) Find the thread statistics with a very high Busy percentage.
3) If it is the reader, run the query in both production and UAT and check the retrieval time. If it is high in production, you need to tune the query, create indexes, or create partitions at the table level and the Informatica level, etc. (depending on your project limitations).
4) If it is the writer, try increasing Informatica options such as 'Maximum memory allocated for auto memory attributes' and 'Maximum percentage of total memory allowed...', depending on your server configuration.
5) Also try using Informatica partitions while loading into the target (provided the target is partitioned on a particular column).
6) Sometimes cache creation takes time because huge tables are used as lookups (check the Busy percentage of the lookup as well). In that case the target also waits for rows to reach the writer thread, because they are still being transformed; you need to tune the lookup by overriding the default query with a tuned version.
Also search for the following keywords:
"Timeout based Commit point" - generally occurs when a writer thread waits for a long time.
"No more lookup cache" - generally occurs when there is a huge data and index cache to build and no space is left on disk, since multiple jobs running in production use the same cache folder.
Thanks and Regards
Raj
Perhaps you should check the query's explain plan in UAT and PROD. Working on the plan in PROD can help. A similar thing happened to me earlier: we checked the SQL plan and found that it was different in PROD compared to UAT, and we had to work with the DBAs to change the plan.
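A rough sketch of how to compare the two plans, assuming an Oracle source (the question doesn't name the database); the connect strings and the SELECT are placeholders for the reader thread's actual source query:
sqlplus etl_user/secret@PRODDB <<'SQL'
EXPLAIN PLAN FOR
SELECT * FROM stg_source_table;   -- substitute the session's source query here
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
SQL
# Run the same block against the UAT connect string and diff the two plans.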
I need some simple help. My hosting provider has recently added a limit on queries per hour,
and my problem is that I have a couple of Magento installations with over 10k products.
For the import I use Magmi, and I saw that it shows stats during the import. What I want to know is which of these numbers (if any) is the actual number of queries executed.
Global Stats
Imported: 3204 items (100%) | Elapsed: 29.4436 | Recs/min: 6530 | Attrs/min: 398330 | Last 0.5%: 0.0721
DB Stats
Requests: 70414 | Elapsed: 17.0054 | Speed: 248441 reqs/min | Avg Reqs: 21.98/item | Efficiency: 57.76% | Last 0.5%: 198 reqs
Thank you in advance.
Fabio
One thing to keep in mind: implement all the caching you possibly can. HTML block caching, APC caching, and full-page caching (a third-party module is required if you're not on Enterprise) all cache data retrieved from the database. If you're pulling it from cache, you don't need to hit the database until the data needs to be refreshed. This makes the site more responsive and is a win all round.
At the command line in SSH, you can issue the command:
mysqladmin status -u dbuser -pdbpass
dbuser and dbpass being your MySQL user and password. It will kick back a line like:
Uptime: 1878 Threads: 1 Questions: 8341 Slow queries: 2 Opens: 8525 Flush tables: 1 Open tables: 512 Queries per second avg: 4.441
This gives you your server uptime and average queries per second. This server should have processed approximately 8,340 queries in the time it was up (uptime × queries per second).
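A rough one-liner for the same estimate (the parsing is fragile and assumes the exact output format shown above, where Uptime is the second field and the queries-per-second average is the last):
mysqladmin status -u dbuser -pdbpass | awk '{printf "%.0f queries since startup\n", $2 * $NF}'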
Another way to see what's going on is to use mysql itself:
mysql -u dbuser -pdbpass dbname -Bse "show status like 'uptime';"
mysql -u dbuser -pdbpass dbname -Bse "show status like 'queries';"
You could then set up a cron job that logs the Queries status value every hour; the queries per hour are then the current total minus the previous total, roughly as sketched below.
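A hypothetical hourly cron sketch (credentials and the log path are placeholders; schedule it from crontab with something like 0 * * * *):
#!/bin/bash
# Log the cumulative Queries counter and the delta since the previous run.
LOG=/var/log/queries_per_hour.log
CURRENT=$(mysql -u dbuser -pdbpass -Bse "SHOW GLOBAL STATUS LIKE 'Queries';" | awk '{print $2}')
PREVIOUS=$(tail -n 1 "$LOG" 2>/dev/null | awk '{print $2}')
echo "$(date '+%F_%T') $CURRENT delta=$(( CURRENT - ${PREVIOUS:-$CURRENT} ))" >> "$LOG"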
I've been using Heroku for one of my applications, and it got shut down today because the row count exceeded 10,000 rows.
I don't understand how this figure is arrived at, though, as Rails tells me I only have around 2,000 records in the DB.
Running pg:info, I see the following:
Plan: Dev
Status: available
Connections: 1
PG Version: 9.1.5
Created: 2012-09-25 03:12 UTC
Data Size: 11.0 MB
Tables: 9
Rows: 15686/10000 (Write access revoked)
Fork/Follow: Unavailable
Can anyone explain to me how I seem to have 15,000 rows despite only having 2,000 records in the database?
Thanks!
Rails alone is not enough. Heroku has a nice SQL console that you can access with:
heroku pg:psql YOUR_DB_URL
Then you can run this query to get a ranking of rows per table:
SELECT schemaname,relname,n_live_tup
FROM pg_stat_user_tables
ORDER BY n_live_tup DESC;
If you only need the updated total number of rows, you can use:
SELECT sum(n_live_tup) FROM pg_stat_user_tables;
Please note that you can have both the new dev-plan DB and the old SHARED one in your config (check with heroku pg:info). You have to use the correct DB URL, probably the one with a color name.
Allow about a 30-minute delay between any SQL truncate and the Rows count updating.
By the way, in my case the web console on http://heroku.com was updated with the correct number while I ran my SQL queries. Maybe the Heroku Toolbelt console updates more slowly.
I contacted Heroku Support on this and they ran the following command to get my numbers...
$ heroku pg:info
30 minutes wasn't enough for me, so I took a backup and restored the database. Then my app came back online.
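For reference, roughly what that looks like with the Heroku CLI (command names have changed across toolbelt versions, so check heroku help pg:backups for your client; your-app is a placeholder):
heroku pg:backups:capture --app your-app
heroku pg:backups:restore --app your-app --confirm your-app   # restores the latest backup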