LogMiner logging in the alert log - Oracle

I have a business process that uses Oracle LogMiner periodically and I notice that every time the LogMiner API is invoked by this process, the database writes a number of entries to the alert log. Is there any way to customize or reduce the verbosity of LOGMINER so that it doesn't write these log entries every time an ad-hoc session is started and a query is executed?
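For reference, the LogMiner calls involved are along these lines (a minimal SQL*Plus sketch, not my exact process; the redo log path is taken from the alert log excerpt below):
EXEC DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/opt/oracle/oradata/ORCLCDB/redo03.log', OPTIONS => DBMS_LOGMNR.NEW);  -- register the redo log
EXEC DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);   -- start the ad-hoc session
SELECT operation, sql_redo FROM v$logmnr_contents WHERE ROWNUM <= 10;             -- mine a few rows
EXEC DBMS_LOGMNR.END_LOGMNR;                                                      -- end the session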
Examples of the entries I'd like to avoid are as follows:
2022-09-08T13:25:27.747284+00:00
LOGMINER: summary for session# = 2148178945
LOGMINER: StartScn: 6452026 (0x000000000062733a)
LOGMINER: EndScn: 6452099 (0x0000000000627383)
LOGMINER: HighConsumedScn: 0
LOGMINER: PSR flags: 0x0
LOGMINER: Session Flags: 0x4000441
LOGMINER: Session Flags2: 0x0
LOGMINER: Read buffers: 4
LOGMINER: Region Queue size: 256
LOGMINER: Redo Queue size: 4096
LOGMINER: Memory LWM: limit 10M, LWM 12M, 80%
LOGMINER: Memory Release Limit: 0M
LOGMINER: Max Decomp Region Memory: 1M
LOGMINER: Transaction Queue Size: 1024
2022-09-08T13:25:27.755395+00:00
LOGMINER: Begin mining logfile for session -2146788351 thread 1 sequence 48, /opt/oracle/oradata/ORCLCDB/redo03.log

Related

Clickhouse Exception: Memory limit (total) exceeded

I'm attempting to have ClickHouse replicate data from PostgreSQL using https://clickhouse.com/docs/en/engines/database-engines/materialized-postgresql/. Any ideas on how to solve the error below, or what's the best way to replicate PostgreSQL data into ClickHouse?
CREATE DATABASE pg_db
ENGINE = MaterializedPostgreSQL('localhost:5432', 'dbname', 'dbuser', 'dbpass')
SETTINGS materialized_postgresql_schema = 'dbschema'
Running SHOW TABLES FROM pg_db; then doesn't show all tables; a large table with 800k rows is missing. When I attempt to attach it using ATTACH TABLE pg_db.lgtable;, I get the error below:
Code: 619. DB::Exception: Failed to add table lgtable to replication.
Info: Code: 241. DB::Exception: Memory limit (total) exceeded: would
use 1.75 GiB (attempt to allocate chunk of 4219172 bytes), maximum:
1.75 GiB. (MEMORY_LIMIT_EXCEEDED) (version 22.1.3.7 (official build)). (POSTGRESQL_REPLICATION_INTERNAL_ERROR) (version 22.1.3.7 (official
build))
I've tried increasing allocated memory and adjusting other settings, but I still get the same problem:
set max_memory_usage = 8000000000;
set max_memory_usage_for_user = 8000000000;
set max_bytes_before_external_group_by = 1000000000;
set max_bytes_before_external_sort = 1000000000;
set max_block_size=512, max_threads=1, max_rows_to_read=512;
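For what it's worth, the "(total)" in the message refers to the server-wide memory limit (governed by max_server_memory_usage or max_server_memory_usage_to_ram_ratio in the server config), which the per-query SET commands above don't change. A hedged sketch of another angle, assuming the materialized_postgresql_max_block_size setting behaves as documented for this engine, is to recreate the database with a smaller sync batch so the initial snapshot of the large table stays under the limit:
DROP DATABASE pg_db;
CREATE DATABASE pg_db
ENGINE = MaterializedPostgreSQL('localhost:5432', 'dbname', 'dbuser', 'dbpass')
SETTINGS materialized_postgresql_schema = 'dbschema',
         materialized_postgresql_max_block_size = 8192; -- assumption: smaller batches lower peak memory during the initial sync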

How to know who has used the predefined database service names, and how, in Oracle Autonomous Transaction Processing

According to this documentation,
https://docs.oracle.com/en/cloud/paas/atp-cloud/atpug/connect-predefined.html#GUID-9747539B-FD46-44F1-8FF8-F5AC650F15BE
Autonomous Transaction Processing provides 5 predefined database service names:
tpurgent
tp
high
medium
low
I would like to know how these service names have been used, including the number of connections and how resources have been consumed.
You can query V$SERVICE_STATS, adapting service_name to the service you're interested in. For example:
SQL> select service_name, name, value
2 from v$service_stats
3 join v$statname
4 on v$service_stats.stat_id = v$statname.stat_id
5 where service_name='SYS$USERS'
6 order by service_name, name;
SERVICE_NAME NAME VALUE
-------------------- ----------------------------------- ----------
SYS$USERS DB time 26317254
SYS$USERS application wait time 379
SYS$USERS cluster wait time 0
SYS$USERS concurrency wait time 20112125
SYS$USERS db block changes 1637
SYS$USERS execute count 7221
SYS$USERS gc cr block receive time 0
SYS$USERS gc cr blocks received 0
SYS$USERS gc current block receive time 0
SYS$USERS gc current blocks received 0
SYS$USERS logons cumulative 108
SYS$USERS opened cursors cumulative 11078
SYS$USERS parse count (total) 1488
SYS$USERS parse time elapsed 13400006
SYS$USERS physical reads 3643
SYS$USERS physical writes 0
SYS$USERS redo size 248932
SYS$USERS session cursor cache hits 10309
SYS$USERS session logical reads 56879
SYS$USERS user I/O wait time 65219837
SYS$USERS user calls 293
SYS$USERS user commits 7
SYS$USERS user rollbacks 1
SYS$USERS workarea executions - multipass 0
SYS$USERS workarea executions - onepass 0
SYS$USERS workarea executions - optimal 2557
26 rows selected.
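Since the question also asks about the number of connections, you can count current sessions per service from V$SESSION (a simple sketch; run it as a user with access to the view, and note it only covers sessions connected right now):
SQL> select service_name, username, count(*) as sessions
2 from v$session
3 group by service_name, username
4 order by service_name, username;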

Oracle control file and undo datafile were deleted. Is there a way to get them back?

I recreated the control file; this is the code:
CREATE CONTROLFILE REUSE DATABASE "ORCL" RESETLOGS NOARCHIVELOG
MAXLOGFILES 5
MAXLOGMEMBERS 3
MAXDATAFILES 100
MAXINSTANCES 1
MAXLOGHISTORY 226
LOGFILE
GROUP 1 '/home/oracle/app/oradata/orcl/redo01.log' SIZE 50M,
GROUP 2 '/home/oracle/app/oradata/orcl/redo02.log' SIZE 50M,
GROUP 3 '/home/oracle/app/oradata/orcl/redo03.log' SIZE 50M
DATAFILE
'/home/oracle/app/oradata/orcl/osc_zb.dbf',
......
CHARACTER SET ZHS16GBK;
Then, opening the database gives the following result:
ORA-01194: file 1 needs more recovery to be consistent
ORA-01110: data file 1: '/home/oracle/app/oradata/orcl/system01.dbf'
recover datafile 1:
ORA-00283: recovery session canceled due to errors
ORA-16433: The database must be opened in read/write mode.
Then I used hidden parameters to start the database:
undo_management='manual'
undo_tablespace='UNDOTBS01'
_allow_resetlogs_corruption=true
That also didn't work:
SQL> startup pfile=/home/oracle/initoracle.ora
ORACLE instance started.
Total System Global Area 1586708480 bytes
Fixed Size 2253624 bytes
Variable Size 973081800 bytes
Database Buffers 603979776 bytes
Redo Buffers 7393280 bytes
Database mounted.
ORA-01113: file 1 needs media recovery
ORA-01110: data file 1: '/home/oracle/app/oradata/orcl/system01.dbf'
And it just cycles:
SQL> recover datafile 1
ORA-00283: recovery session canceled due to errors
ORA-16433: The database must be opened in read/write mode.
I have no idea how to restore the database. Gurus, please help me.
Can you start the database to MOUNTED status? If so, you can try the following method.
First, find the 'CURRENT' redo group:
select group#,sequence#,status,first_time,next_change# from v$log;
and find the redo file location:
select * from v$logfile;
Then recover the database using this redo log:
SQL> recover database until cancel using backup controlfile;
ORA-00279: change 4900911271334 generated at 03/06/2018 05:46:29 needed for
thread 1
ORA-00289: suggestion :
/home/wonders/app/wonders/flash_recovery_area/ORCL/archivelog/2018_03_12/o1_mf_1
_4252_%u_.arc
ORA-00280: change 4900911271334 for thread 1 is in sequence #4252
Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
/home/wonders/app/wonders/oradata/orcl/redo01.log
Log applied.
Media recovery complete.
Finally, open the database with RESETLOGS:
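SQL> alter database open resetlogs;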

Magento 1.7 / 1.8 deadlocks from index_process table

I'm having big problems with Magento. Last Friday we upgraded Magento from 1.7 to 1.8.
The issue is that we're having a lot of deadlocks in the MySQL database.
Our server setup is:
1 load balancer
4 web servers (Apache, PHP 5, APC)
2 MySQL servers (64 GB RAM, 30 cores, SSD), 1 master (Memcache for sessions) and 1 slave (Redis for caching)
The deadlocks are less frequent on Magento 1.8 than on 1.7, but they still appear from time to time.
Does anyone have good ideas on how to get past this problem?
Here's some output from SHOW ENGINE INNODB STATUS;
LATEST DETECTED DEADLOCK
130930 12:03:35
*** (1) TRANSACTION:
TRANSACTION 918EEC3B, ACTIVE 37 sec starting index read
mysql tables in use 1, locked 1
LOCK WAIT 41 lock struct(s), heap size 6960, 50 row lock(s), undo log entries 6
MySQL thread id 51899, OS thread handle 0x7f9774169700, query id 2583719 xxx.xx.xxx.47 dbxxx Updating
UPDATE m17_index_process SET started_at = '2013-09-30 10:03:36' WHERE (process_id='8')
*** (1) WAITING FOR THIS LOCK TO BE GRANTED:
RECORD LOCKS space id 594 page no 3 n bits 208 index PRIMARY of table dbxxx.m17_index_process trx id 918EEC3B lock_mode X locks rec but not gap waiting
*** (2) TRANSACTION:
TRANSACTION 918EE3E7, ACTIVE 72 sec starting index read
mysql tables in use 1, locked 1
680 lock struct(s), heap size 80312, 150043 row lock(s), undo log entries 294
MySQL thread id 51642, OS thread handle 0x7f8a336c7700, query id 2586254 xxx.xx.xxx.47 dbxxx Updating
UPDATE m17_index_process SET started_at = '2013-09-30 10:03:40' WHERE (process_id='8')
*** (2) HOLDS THE LOCK(S):
RECORD LOCKS space id 594 page no 3 n bits 208 index PRIMARY of table dbxxx.m17_index_process trx id 918EE3E7 lock mode S locks rec but not gap
*** (2) WAITING FOR THIS LOCK TO BE GRANTED:
RECORD LOCKS space id 594 page no 3 n bits 208 index PRIMARY of table dbxxx.m17_index_process trx id 918EE3E7 lock_mode X locks rec but not gap waiting
*** WE ROLL BACK TRANSACTION (1)
Best Regards.
Rasmus
The deadlocks seem to be due to the indexing processes. Try disabling automatic indexing (see Magento - Programmatically Disable Automatic Indexing) and running the indexers manually; a sketch of the manual switch follows below.
Also try disabling cron for some time and check whether the issues reoccur.
It's possible that many store admins are saving products from different stores; in that case, product saves may be deadlocking against the index processes.
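A hedged sketch of flipping the indexers to manual mode directly in MySQL (the m17_ table prefix matches the deadlock output above; the mode column with values 'real_time' and 'manual' is how Magento 1 stores the indexing mode, but verify against your schema before running this):
UPDATE m17_index_process SET mode = 'manual' WHERE mode = 'real_time';
You can then reindex during off-peak hours from the admin panel or with the shell indexer script that ships with Magento 1.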
Thanks

How to generate an explain plan for an entire stored procedure

I usually generate explain plans using the following in sqlplus:
SET AUTOTRACE ON
SET TIMING ON
SET TRIMSPOOL ON
SET LINES 200
SPOOL filename.txt
SET AUTOTRACE TRACEONLY;
{query goes here}
SPOOL OFF
SET AUTOTRACE OFF
But what if I want to generate an explain plan for a stored procedure?
Is there a way to generate an explain plan for the entire stored procedure? The SP has no input/output parameters.
What you are generating is correctly called an "execution plan". "Explain plan" is a command used to generate and view an execution plan, much as AUTOTRACE TRACEONLY does in your example.
By definition, an execution plan is for a single SQL statement. A PL/SQL block does not have an execution plan. If it contains one or more SQL statements, then each of those will have an execution plan.
One option is to manually extract the SQL statements from the PL/SQL code and use the process you've already shown.
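For a single extracted statement, the EXPLAIN PLAN command mentioned above looks like this (the table and predicate are hypothetical):
EXPLAIN PLAN FOR select * from emp where deptno = 10;
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);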
Another option is to activate SQL tracing and then run the procedure. This will produce a trace file on the server that contains the execution plans for all statements executed in the session. The trace is in fairly raw form, so it is generally easiest to format it using Oracle's TKPROF tool; various third-party tools can also process these trace files.
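A minimal sketch of the tracing approach (the procedure name my_proc is hypothetical; tracefile_identifier just makes the trace file easier to find on the server):
ALTER SESSION SET tracefile_identifier = 'my_proc_trace';
EXEC DBMS_SESSION.SESSION_TRACE_ENABLE(waits => TRUE, binds => FALSE);
EXEC my_proc;
EXEC DBMS_SESSION.SESSION_TRACE_DISABLE;
Then format the resulting trace file on the server, for example: tkprof ORCLCDB_ora_12345.trc my_proc_plan.txt sys=no (the trace file name here is hypothetical).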
Hi, I have done the following for the stored procedure:
SET AUTOTRACE ON
SET TIMING ON
SET TRIMSPOOL ON
SET LINES 200
SPOOL filename.txt
SET AUTOTRACE TRACEONLY;
@{your stored procedure script}
SPOOL OFF
SET AUTOTRACE OFF
And got the statistics below:
Statistics
-----------------------------------------------------------
6 CPU used by this session
8 CPU used when call started
53 DB time
6 Requests to/from client
188416 cell physical IO interconnect bytes
237 consistent gets
112 consistent gets - examination
237 consistent gets from cache
110 consistent gets from cache (fastpath)
2043 db block gets
1 db block gets direct
2042 db block gets from cache
567 db block gets from cache (fastpath)
27 enqueue releases
27 enqueue requests
4 messages sent
31 non-idle wait count
19 non-idle wait time
44 opened cursors cumulative
2 opened cursors current
22 physical read total IO requests
180224 physical read total bytes
1 physical write total IO requests
8192 physical write total bytes
1 pinned cursors current
461 recursive calls
4 recursive cpu usage
2280 session logical reads
1572864 session pga memory
19 user I/O wait time
9 user calls
1 user commits
No Errors.
Autotrace Disabled
