Can we change the archive_lag_target DB parameter value to 1800 in RDS?
I only see the allowed values (60, 120, 180, 240, 300). Is there any other way to achieve this in Amazon RDS for Oracle?
No, it appears that 60, 120, 180, 240, 300 are the only permitted values.
I tried it via the AWS Command-Line Interface (CLI):
$ aws rds modify-db-parameter-group --db-parameter-group-name oracle --parameters ParameterName=archive_lag_target,ParameterValue=1800
The response was:
An error occurred (InvalidParameterValue) when calling the ModifyDBParameterGroup operation: Value: 1800 is outside of range: 60,120,180,240,300 for parameter: archive_lag_target
From ARCHIVE_LAG_TARGET Oracle documentation:
ARCHIVE_LAG_TARGET limits the amount of data that can be lost and effectively increases the availability of the standby database by forcing a log switch after the specified amount of time elapses.
A 0 value disables the time-based thread advance feature; otherwise, the value represents the number of seconds. Values larger than 7200 seconds are not of much use in maintaining a reasonable lag in the standby database. The typical, or recommended value is 1800 (30 minutes). Extremely low values can result in frequent log switches, which could degrade performance; such values can also make the archiver process too busy to archive the continuously generated logs.
So, it appears that Amazon RDS is forcing a maximum of 5 minutes lag.
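If you do apply one of the permitted values (300 being the largest), you can confirm what the instance is actually using afterwards with a standard query, for example:
-- Check the value of archive_lag_target currently in effect
SELECT name, value
FROM   v$parameter
WHERE  name = 'archive_lag_target';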
I look after two RDS client sites, both of which are on Oracle 12.1.0.2.v12.
One of the instances has a high CPU load which appears to be caused by the 12.1 "feature" known as Automatic Report Capturing.
It's a known issue:
https://smarttechways.com/2017/10/11/with-monitor_data-as-select-inst_id-query-found-caused-performance-issue/
https://liups.com/wp-content/uploads/2018/01/Document-2102131.1.pdf
and can ordinarily be disabled by running alter system set "_report_capture_cycle_time"=0;
However, as this is an RDS instance, this parameter can't be set.
What's puzzling me is that the other site doesn't appear to have this issue - the specific SQL statement doesn't appear in session history:
WITH MONITOR_DATA AS (SELECT INST_ID, KEY, NVL2(PX_QCSID, NULL, STATUS) STATUS, FIRST_REFRESH_TIME, LAST_REFRESH_TIME, REFRESH_COUNT, PROCESS_NAME, SID, SQL_ID, SQL_EXEC_START, SQL_EXEC_ID, DBOP_NAME, DBOP_EXEC_ID, SQL_PLAN_HASH_VALUE, SQL_FULL_PLAN_HASH_VALUE, SESSION_SERIAL#, SQL_TEXT, IS_FULL_SQLTEXT, PX_SERVER#, PX_SERVER_GROUP, PX_SERVER_SET, PX_QCINST_ID, PX_QCSID, CASE WHEN ELAPSED_TIME < (CPU_TIME+ APPLICATION_WAIT_TIME+ CONCURRENCY_WAIT_TIME+ CLUSTER_WAIT_TIME+ USER_IO_WAIT_TIME+ QUEUIN... (truncated)
It's as though the "feature" has been disabled on the other site somehow - or conversely, it's somehow been inadvertently enabled at the problematic site.
Any ideas on how this can be disabled?
Normally, if you have a huge amount of CPU consumption, you will probably see error messages of this type in the alert log:
Thu Sep 08 04:00:41 2016
Errors in file /app/oracle/diag/rdbms/dbname/dbinstance/trace/dbinstance_m002_14490.trc:
ORA-12850: Could not allocate slaves on all specified instances: 3 needed, 2 allocated
If that is the case, then indeed disabling the feature can only be done by
alter system set "_report_capture_cycle_time"=0; /* Default is 60 seconds */
Reason:
If the CPU consumption is significantly high then it is not an
expected behaviour and could be due to optimizer choosing suboptimal
plan for the SQL statements. This can happen due to Adaptive
Optimization, a new feature in 12c.
Therefore, if you can't change the hidden parameter, you might try disabling Adaptive Optimization altogether:
alter system set optimizer_adaptive_features = false scope=both ;
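Afterwards, you can confirm what the parameter is set to (and whether it still has its default value) with a standard check such as:
-- Verify the current setting of the adaptive-features parameter
SELECT name, value, isdefault
FROM   v$parameter
WHERE  name = 'optimizer_adaptive_features';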
As the documentation states
In 12.1, adaptive optimization as a whole is controlled by the dynamic
parameter optimizer_adaptive_features, which defaults to TRUE. All of
the features it controls are enabled when optimizer_features_enable >=
12.1.
Either you upgrade to 19c, or you disable all optimizer adaptive features
Amazon RDS now supports 19c
"optimizer_adaptive_features" is not related to "_report_capture_cycle_time" anyhow.
I have a complex query that runs a long time (e.g. 30 minutes) in Snowflake when I run it in the Snowflake console. I am making the same query from a JVM application using the JDBC driver. What appears to happen is this:
Snowflake processes the query from start to finish, taking 30 minutes.
JVM application receives the rows. The first receive happens 30 minutes after the query started.
What I'd like to happen is that Snowflake starts to send rows to my application while it is still executing the query, as soon as data is ready. This way my application could start processing the rows in the first 30 minutes.
Is this possible with Snowflake and JDBC?
First of all, I would suggest checking the Snowflake warehouse size and doing some tuning. It's not worth waiting 30 minutes when resizing the warehouse can cut the query time to a quarter of that or less. With either of the options below, your cost will be roughly the same or lower, since query execution time tends to drop roughly linearly as you increase the warehouse size. Refer to the link; example statements follow the list.
Scale up by resizing a warehouse.
Scale out by adding clusters to a warehouse (requires Snowflake Enterprise Edition or higher).
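For illustration, statements along these lines would do the resizing and scaling; the warehouse name my_wh is just a placeholder:
-- Scale up: move the warehouse to a larger size
ALTER WAREHOUSE my_wh SET WAREHOUSE_SIZE = 'LARGE';
-- Scale out: allow up to 3 clusters (multi-cluster warehouses require Enterprise Edition or higher)
ALTER WAREHOUSE my_wh SET MIN_CLUSTER_COUNT = 1 MAX_CLUSTER_COUNT = 3;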
Now coming to JDBC, I believe it behaves the same way as it does with other databases.
I'm using DBeaver to connect to an Oracle database. The database connection and table properties views work fine without any delay, but fetching table data is too slow (sometimes around 50 seconds).
Any settings to speed up fetching table data in DBeaver?
Changing the following setting in your Oracle DB connection will make fetching table data faster than when it is not set:
Right click on your DB connection --> Edit Connection --> Oracle properties --> tick 'Use RULE hint for system catalog queries'
(by default this is not set)
UPDATE
In the newer version (21.0.0) of DBeaver, many more performance options appear here. Turning them on significantly improves performance for me.
I've never used DBeaver, but I often see applications that use too small an "array fetch size"**, which often causes fetch issues.
** Array fetch size note:
As per the Oracle documentation, the fetch buffer size is an application-side memory setting that controls the number of rows returned by a single fetch. Generally you balance the number of rows returned by a single fetch (a.k.a. the array fetch size) against the total number of rows that need to be fetched.
A low array fetch size relative to the number of rows that need to be returned will manifest as delays from the increased network and client-side processing needed for each fetch (i.e. the high cost of each network round trip [SQL*Net protocol]).
If this is the case, you will likely see very high waits on "SQL*Net message from client" [in gv$session or elsewhere].
SQL*Net message from client
This wait event is posted by the session when it is waiting for a message from the client to arrive. Generally, this means that the session is just sitting idle; however, in a client/server environment it could also mean that either the client process is running slow or there are network latency delays. The database performance is not degraded by high wait times for this wait event.
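If you want to check for this while DBeaver is fetching, a query along these lines against v$session_event can show how much time is going into those round trips; the SID is something you would need to look up for the DBeaver session first (a sketch, not DBeaver-specific):
-- Waits accumulated by one session, filtered to the SQL*Net round-trip events
SELECT event, total_waits, time_waited
FROM   v$session_event
WHERE  sid = :dbeaver_sid   -- SID of the DBeaver session, e.g. from v$session
AND    event LIKE 'SQL*Net%';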
I just set up a db.t2.micro instance on Amazon's AWS. I am using Sinatra to serve a webpage on localhost, and Active Record to run roughly 30 queries; the page takes 92 seconds to load. It's extremely slow. I tried setting custom parameters as listed here: http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_BestPractices.html#CHAP_BestPractices.PostgreSQL
This didn't help speed anything up. I'm not sure how I can speed up this instance. This is my first time hosting a database. Any help would be appreciated.
When I run my Sinatra app, it is hosted locally (localhost); this is where the ~30 queries take 92 seconds to load. When I run SELECT * statements directly in Postgres, they take only a couple of seconds.
The problem is the latency between you and Amazon's data center.
For example, when you are in New York and your RDS instance is in Amazon's data center on the west coast, the latency between you and the data center is about 80-100 ms. That means when your local application sends a query to the database, it takes about 100 ms before the database receives the query, and returning the answer takes an additional 100 ms.
That said: assume a round trip takes 300 ms and you have ~30 queries; then your application loses about 10 seconds doing nothing, just waiting for data to be sent over the wire. And there are other factors that might slow this down even more: big packets or lost packets (the server has to ask again), bad internet connections, wireless connections, or the distance between you and the database being longer than in my example.
Therefore, the database should be as close as possible to the application server, ideally in the same data center, to minimize latency.
I have an Azure website handling about 100K requests/hour; it connects to an Azure SQL S2 database with about 8 GB of throughput per day. I've spent a lot of time optimizing the database indexes, queries, etc. Normally the Data IO, CPU and Log IO percentages are well behaved, in the 20% range.
A portion of the recent data throughput is retained to support our customers. I have a nightly maintenance procedure that removes obsolete data to manage database size. This mostly works well, with the exception of removing image blobs in a varbinary(max) field.
The nightly procedure has a loop that sets the varbinary(max) field to null for 10 records at a time, waits a couple of seconds, then sets the next 10. The nightly total for this loop is about 2,000 records.
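For reference, the loop is doing essentially the following (the table and column names here are made up for illustration):
-- Sketch of the nightly cleanup loop; dbo.CustomerImages / ImageBlob / IsObsolete are hypothetical names
WHILE EXISTS (SELECT 1 FROM dbo.CustomerImages WHERE IsObsolete = 1 AND ImageBlob IS NOT NULL)
BEGIN
    UPDATE TOP (10) dbo.CustomerImages
    SET    ImageBlob = NULL
    WHERE  IsObsolete = 1
    AND    ImageBlob IS NOT NULL;

    WAITFOR DELAY '00:00:02';   -- pause a couple of seconds between batches
END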
This loop will run for about 45 - 60 minutes and then stop, with no return to my remote SQL Agent job and no error reported. A second, and sometimes third, run of the procedure is necessary to finish setting the desired blobs to null.
In an attempt to alleviate the load on the nightly procedure, I started running a job once every 30 seconds throughout the day - it sets one blob to null each time.
Normally this trickle job is fine and runs in 1 - 6 seconds. However, once or twice a day something goes wrong and I can find no explanation for it. The Data I/O percentage peaks at 100% and stays there for 30 - 60 minutes or longer. This causes the database responsiveness to suffer and the website performance goes with it. The trickle job also reports running for this extended period of time. If I stop the Sql Agent job, it can take a few minutes to stop but the Data I/O continues at 100% for the 30 - 60 minute period.
The web service requests and database demands are relatively steady throughout the business day - no volatile demands that would explain this. No database deadlocks or other errors are reported. It's as if the database hits some kind of backlog limit where its ability to keep up suddenly drops and then it can't catch up until something that is jammed finally clears. Then the performance will suddenly return to normal.
Do you have any ideas what might be causing this intermittent and unpredictable issue? Any ideas what I could look at when one of these events is happening to determine why the Data I/O is 100% for an extended period of time? Thank you.
If you are on SQL DB V12, you may also consider using the Query Store feature to root cause this performance problem. It's now in public preview.
In order to turn on Query Store just run the following statement:
ALTER DATABASE your_db SET QUERY_STORE = ON;
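Once Query Store has collected some data, a query along these lines against the standard Query Store catalog views can show which statements were driving the I/O during one of the bad intervals (the date filter is just a placeholder for the time of a spike):
-- Top statements by logical I/O from Query Store; adjust the interval filter to a spike window
SELECT TOP (10)
       q.query_id,
       qt.query_sql_text,
       rs.count_executions,
       rs.avg_logical_io_reads
FROM   sys.query_store_runtime_stats rs
JOIN   sys.query_store_plan p        ON p.plan_id = rs.plan_id
JOIN   sys.query_store_query q       ON q.query_id = p.query_id
JOIN   sys.query_store_query_text qt ON qt.query_text_id = q.query_text_id
JOIN   sys.query_store_runtime_stats_interval i
       ON i.runtime_stats_interval_id = rs.runtime_stats_interval_id
WHERE  i.start_time >= '2015-10-01 14:00'   -- placeholder: start of one of the 100% Data IO spikes
ORDER  BY rs.avg_logical_io_reads * rs.count_executions DESC;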