I want to know if there is a way to export an Oracle table and create a CloudWatch graph out of it. We have one table which holds records of connections to the DB from different servers, and it refreshes every 5 minutes. We want to display this as a graph in either CloudWatch or Datadog. Please let me know if it's possible and, if it is, how I can approach it.
Thanks
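One way to approach it, as a rough sketch rather than a definitive recipe: run a small script on the same 5-minute schedule (cron, Lambda, etc.) that reads the table and publishes the counts as CloudWatch custom metrics, which CloudWatch can then graph (Datadog has an equivalent custom-metrics API). This assumes the python-oracledb and boto3 libraries; the table and column names below are made up.

```python
# Sketch: publish per-server connection counts from an Oracle table as
# CloudWatch custom metrics. Table/column names are placeholder assumptions.
import boto3
import oracledb

oracle = oracledb.connect(user="monitor", password="...", dsn="db-host/ORCL")
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cur = oracle.cursor()
cur.execute("SELECT server_name, connection_count FROM db_connections")

metric_data = [
    {
        "MetricName": "DbConnections",
        "Dimensions": [{"Name": "Server", "Value": server}],
        "Value": float(count),
        "Unit": "Count",
    }
    for server, count in cur.fetchall()
]

# PutMetricData has per-call limits (older accounts: 20 entries per call),
# so batch conservatively.
for i in range(0, len(metric_data), 20):
    cloudwatch.put_metric_data(Namespace="Custom/Oracle",
                               MetricData=metric_data[i:i + 20])
```

Once the metrics exist under the namespace, you can build a CloudWatch dashboard graph over them, one line per Server dimension.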
I have a ClickHouse database for logs, and I want to keep only the last day of them. I have a mechanism that aggregates logs by app_name: it creates a table in the database for each app and pushes that app's logs into it. So the main question is how I can specify a TTL for every table that will be created in the database.
I have done this manually with basic table-level TTL usage, but I can't find anything for the whole DB.
You can't set a TTL at the database level; it's table or column level only: https://clickhouse.com/docs/en/engines/table-engines/mergetree-family/mergetree#table_engine-mergetree-ttl
You'll need to either schedule ALTER commands or modify your table creation logic.
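A minimal sketch of both options, assuming the clickhouse-driver package and that every log table has a DateTime column named ts (the database, column, and host names are assumptions):

```python
# Sketch: give every per-app log table a 1-day TTL, either at creation time
# or by retrofitting existing tables with ALTER on a schedule.
from clickhouse_driver import Client

client = Client(host="localhost")

# Option A: bake the TTL into the table-creation logic.
def create_app_table(app_name: str) -> None:
    client.execute(f"""
        CREATE TABLE IF NOT EXISTS logs.{app_name} (
            ts  DateTime,
            msg String
        )
        ENGINE = MergeTree
        ORDER BY ts
        TTL ts + INTERVAL 1 DAY
    """)

# Option B: scheduled sweep that applies the TTL to all existing tables.
def apply_ttl_to_all() -> None:
    tables = client.execute(
        "SELECT name FROM system.tables WHERE database = 'logs'")
    for (name,) in tables:
        client.execute(
            f"ALTER TABLE logs.{name} MODIFY TTL ts + INTERVAL 1 DAY")
```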
Let's say there is a job A which executes a Python script to connect to Oracle, fetch the data from Table A, and load it into Snowflake once a day. Application A, which depends on Table A in Snowflake, can simply depend on the success of job A for further processing; this is easy.
But if the data movement is via replication (change data capture from Oracle moves to S3 using GoldenGate, pipes push it into a stage, and a stream moves it to the target using a Task every few minutes), what is the best way to let Application A know that the data is ready? How do we check whether the data is ready? Is there something available in Oracle, like a table-level marker, that can be moved over to Snowflake? The tables in Oracle cannot be modified to add anything new, and marker rows cannot be added either; these are impractical. But something that Oracle provides implicitly and that can be moved over to Snowflake, or some SCN-like number at the table level that can be compared every few minutes, could be a solution. I'm eager to hear about any approaches.
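One possible pattern, under the assumption that you can replicate GoldenGate's built-in heartbeat table (ADD HEARTBEATTABLE) like any other table: its arrival time in Snowflake gives a readiness signal without modifying the source tables, since a heartbeat that has landed implies earlier commits have landed too (assuming ordered apply). A rough Python sketch, with all connection details and names as placeholders:

```python
# Sketch: poll a replicated GoldenGate heartbeat table in Snowflake to decide
# whether Application A may run. Assumes the heartbeat mechanism is enabled
# on the source and replicated; names and credentials below are placeholders.
import datetime
import snowflake.connector

FRESHNESS = datetime.timedelta(minutes=10)  # tolerated replication lag

conn = snowflake.connector.connect(
    account="my_account", user="my_user", password="...",
    warehouse="my_wh", database="my_db", schema="ggschema",
)

def data_is_ready() -> bool:
    """True if the replicated heartbeat is recent enough."""
    cur = conn.cursor()
    try:
        # The heartbeat row is rewritten on the Oracle side every few seconds;
        # if a recent one is visible here, replication has caught up to it.
        cur.execute("SELECT MAX(heartbeat_timestamp) FROM gg_heartbeat")
        (last_beat,) = cur.fetchone()
    finally:
        cur.close()
    return last_beat is not None and \
        datetime.datetime.utcnow() - last_beat < FRESHNESS

if data_is_ready():
    print("Replication is caught up; Application A can proceed.")
```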
We recently started the process of continuous migration (initial load + CDC) from an Oracle database on RDS to S3 using AWS DMS. DMS is reading the DB with LogMiner.
The problem we have detected is that CDC records of type Update contain only the data that was updated, leaving the rest of the fields empty, so we lose the possibility of simply taking the record with the maximum timestamp value as the valid one.
Does anyone know if this can be changed, or which part of the DMS or RDS configuration to touch, so that the update contains the values of all the fields of the record?
Thanks in advance.
Supplemental logging at the table level may increase what is logged, but it will also increase the total volume of log data written for a given workload.
Many log-based data replication products from various vendors require additional supplemental logging at the table level to ensure that the full row data for updates, with before and after change data, is written to the database logs.
See: https://docs.oracle.com/database/121/SUTIL/GUID-D857AF96-AC24-4CA1-B620-8EA3DF30D72E.htm#SUTIL1582
Pulling data through LogMiner may be possible, but you will need to evaluate whether it will scale to the data volumes you need.
DMS full load/CDC also supports Binary Reader, which is a better option than LogMiner. To capture updates WITH all the columns, use "ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS" on the Oracle side.
This will push all the columns of an update record to the endpoint, from both Oracle RAC and non-RAC databases. Also, a pointer for CDC: use TRANSACT_ID on the DMS side to generate a unique sequence for each record. Redo volume will be a little higher, but it is what it is; you can keep an eye on it and DROP the supplemental logging at the table level if required.
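For reference, a minimal sketch of the statements involved, using the python-oracledb driver; the schema and table names are placeholders:

```python
# Sketch: enable table-level supplemental logging for all columns so DMS
# update records carry the full row, then verify it took effect.
import oracledb

conn = oracledb.connect(user="admin", password="...",
                        dsn="mydb.xxxxx.rds.amazonaws.com:1521/ORCL")
cur = conn.cursor()

# Enable full-column supplemental logging on the replicated table.
cur.execute("ALTER TABLE myschema.mytable "
            "ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS")

# Verify: all-column logging shows up as a log group on the table.
cur.execute("""
    SELECT log_group_type
    FROM   dba_log_groups
    WHERE  owner = 'MYSCHEMA' AND table_name = 'MYTABLE'
""")
print(cur.fetchall())   # expect 'ALL COLUMN LOGGING' among the rows

# To undo later if redo volume becomes a problem:
# cur.execute("ALTER TABLE myschema.mytable "
#             "DROP SUPPLEMENTAL LOG DATA (ALL) COLUMNS")
conn.close()
```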
Cheers!
I have a very sensitive application using Oracle 12c, where I need to send a notification to another system whenever there is an insert/update on a table. The ways I know to achieve this are:
Polling the table at regular intervals.
Putting a trigger on the table for insert/update. In both of these cases I am worried about the additional load on the database.
Replicating the data to another database with GoldenGate and continuously polling that copy, so that I don't have to worry about the overhead.
I'm not sure about a materialized view; can it be lightweight if refreshed every 1-2 seconds?
Is there a lightweight programmatic alternative anyone can suggest?
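For what it's worth, option 1 (polling) can be kept fairly cheap if the table has an indexed last-modified timestamp column, which is an assumption here; the table, column, and connection names below are made up. A minimal polling sketch:

```python
# Sketch of the polling approach: track a watermark on an indexed
# last-modified column so each poll is a cheap index range scan.
import time
import oracledb

conn = oracledb.connect(user="app", password="...", dsn="db-host/ORCLPDB1")
cur = conn.cursor()

def notify_other_system(ts):
    print(f"change detected up to {ts}")  # replace with a real HTTP/queue call

watermark = None  # newest change timestamp we have already notified on

while True:
    if watermark is None:
        cur.execute("SELECT MAX(updated_at) FROM watched_table")
        (watermark,) = cur.fetchone()
    else:
        cur.execute(
            "SELECT MAX(updated_at) FROM watched_table WHERE updated_at > :w",
            w=watermark)
        (newest,) = cur.fetchone()
        if newest is not None:
            notify_other_system(newest)
            watermark = newest
    time.sleep(2)  # poll every 2 seconds
```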
I'm brand new to the big data ecosystem, but I have good SQL knowledge and have worked only in relational databases. There is a scenario in my case. We have a table in Hive which records error details from the log. My requirement is that whenever data is inserted into the error log table, the system should trigger an alert mail. I'm looking for a kind of "database trigger". I know a trigger is not possible on a Hive table since it is a warehouse table. My question is: is there any workaround to achieve this?
I propose that you use Elasticsearch for this need instead; with Watcher or X-Pack you can generate alerts. Hive is not the right technology for your needs.
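A rough sketch of what shipping the error rows into Elasticsearch could look like, assuming the pyhive and elasticsearch Python packages; the host names, index name, and Hive query are placeholders. A Watcher (or X-Pack alerting rule) on the index, counting new documents over the last few minutes, would then send the alert mail:

```python
# Sketch: copy error rows from the Hive error-log table into an Elasticsearch
# index so an alerting rule can fire on new documents. Not a full pipeline.
from pyhive import hive
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")
conn = hive.Connection(host="hive-host", port=10000, database="default")
cur = conn.cursor()

# Pull error rows from the Hive error-log table.
cur.execute("SELECT error_time, error_msg FROM error_log")
for error_time, error_msg in cur.fetchall():
    es.index(index="error-logs",
             document={"error_time": str(error_time), "error_msg": error_msg})
```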