In the V$LOGMNR_CONTENTS dictionary view, the TIMESTAMP and COMMIT_TIMESTAMP columns are of the DATE data type, without any time zone information. So which time zone are they in: the database time zone, the host time zone, or UTC? Is there a database parameter to configure their time zone?
I guess it is the time zone of the database server's operating system, simply because SYSDATE, which might be used for the insert, is also returned in the time zone of the database server's operating system.
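A quick way to check this on your own system (SYSTIMESTAMP carries the server's UTC offset, so comparing it with the zoneless SYSDATE shows which zone SYSDATE reflects):
-- Compare the zoneless SYSDATE with SYSTIMESTAMP, which includes the offset
SELECT SYSDATE, SYSTIMESTAMP, DBTIMEZONE, SESSIONTIMEZONE FROM dual;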
Perhaps Oracle uses the DATE data type instead of TIMESTAMP for historical reasons. I don't know when TIMESTAMP was introduced, but DATE certainly came earlier.
When a SELECT statement is executed against the V$LOGMNR_CONTENTS view, the archived redo log files are read sequentially; these are the files present in the archive log destination. Translated records from the redo log files are returned as rows in this view. This continues until either the filter criteria specified at startup (endTime or endScn) are met or the end of the archive log file is reached.
The TIMESTAMP field is the timestamp at which the database change was made. It corresponds to the SCN via the SCN_TO_TIMESTAMP transformation, so for a given SCN you have a corresponding timestamp.
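For example, to see the timestamp that corresponds to the database's current SCN (any valid SCN within the retention window works the same way):
SELECT current_scn, SCN_TO_TIMESTAMP(current_scn) FROM v$database;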
The COMMIT_TIMESTAMP field is the timestamp at which the transaction was committed; it is only meaningful if the COMMITTED_DATA_ONLY option was chosen in the DBMS_LOGMNR.START_LOGMNR() invocation. As you know, querying the redo and archive logs requires that you invoke this package in a LogMiner session.
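A minimal sketch of such an invocation (the archived log file name here is a made-up example; substitute one from your archive destination):
BEGIN
  -- Register an archived log to mine (file name is hypothetical)
  DBMS_LOGMNR.ADD_LOGFILE(logfilename => '/u01/arch/arch_1_123.arc',
                          options     => DBMS_LOGMNR.NEW);
  -- Start the session; COMMITTED_DATA_ONLY makes COMMIT_TIMESTAMP meaningful
  DBMS_LOGMNR.START_LOGMNR(options => DBMS_LOGMNR.COMMITTED_DATA_ONLY +
                                      DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
END;
/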
Actually, Oracle sometimes uses the DATE data type in dictionary fields where it probably should use TIMESTAMP. Why? I honestly don't know; it is the same kind of inconsistency as when some dictionary views use OWNER, others TABLE_OWNER, and others OWNER_NAME.
The DBTIMEZONE is specified in the CREATE DATABASE statement, that is, at the moment you create the database. You can change it later using ALTER DATABASE:
ALTER DATABASE SET TIME_ZONE = 'EST';
Keep in mind that altering the database time zone only takes effect after a shutdown/startup, and it is not recommended.
TIMESTAMP WITH TIME ZONE is a variant of TIMESTAMP that includes a time zone region name or time zone offset in its value. The time zone offset is the difference (in hours and minutes) between local time and UTC (Coordinated Universal Time, formerly Greenwich Mean Time).
Oracle Database normalizes all new TIMESTAMP WITH LOCAL TIME ZONE data to the time zone of the database when the data is stored on disk. Oracle Database does not automatically update existing data in the database to the new time zone. Therefore, you cannot reset the database time zone if there is any TIMESTAMP WITH LOCAL TIME ZONE data in the database. You must first delete or export the TIMESTAMP WITH LOCAL TIME ZONE data and then reset the database time zone. For this reason, Oracle does not encourage you to change the time zone of a database that contains data.
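Before attempting such a change, you can check whether any such data exists (a quick sketch against the standard dictionary view; requires DBA privileges):
-- Find all columns whose type is TIMESTAMP WITH LOCAL TIME ZONE
SELECT owner, table_name, column_name
FROM   dba_tab_columns
WHERE  data_type LIKE 'TIMESTAMP(%) WITH LOCAL TIME ZONE';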
An example from my own case: I have an Oracle Database in Azure (where all the servers use UTC). I chose to keep UTC rather than set a different DBTIMEZONE, and then created a function to transform any timestamp stored in any table to my time zone.
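A minimal sketch of such a function (the name and the target zone 'Europe/Madrid' are my assumptions; adjust them to your own zone):
CREATE OR REPLACE FUNCTION utc_to_local (p_ts IN TIMESTAMP)
  RETURN TIMESTAMP
IS
BEGIN
  -- Interpret the stored value as UTC, shift it to the local zone,
  -- then drop the zone again for display
  RETURN CAST(FROM_TZ(p_ts, 'UTC') AT TIME ZONE 'Europe/Madrid' AS TIMESTAMP);
END utc_to_local;
/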
I wonder why you need to read the redo/archive logs; do you have to recover some lost transactions? I hope the explanation is satisfactory. Please don't hesitate to comment or ask about any other doubts you may have.
We have an issue when we copy data from Oracle to ADLS using ADF (Azure Data Factory).
The Oracle DB has tables with timestamp values in a European time zone. We use Azure Data Factory to copy the data into ADLS. The Data Factory IR (Integration Runtime) runs on an on-prem VM that is in the US Eastern time zone.
The issue: when we copy an Oracle table that has a timestamp (but no time zone), the ADF copy activity automatically converts the timestamp value to US Eastern time. We don't want this to happen; we want to ingest the data exactly as it is in the source table.
Example:
Data in Oracle table: 2020-03-04T00:00:00 (this is in CET)
Data in ADLS: 2020-03-03T19:00:00.000+0000 (the above date got converted to US Eastern; since there is no timezone info in the Oracle table, it is being interpreted as UTC by Spark (+0000))
Expected in ADLS: 2020-03-04T00:00:00.000+0000 (we don't want any time zone conversion)
Is there a way to enforce a time zone at the Oracle connection level in Azure Data Factory?
We tried to set a property in the Oracle linked service connection parameters (PFA), but this had no effect on the time zone; we still got it converted to Eastern time.
TIME_ZONE='Europe/Madrid'
TIME_ZONE='CET'
Timestamp is internally converted to DateTime in ADF.
(Image source: MS documentation.)
Thus, in the Mapping tab of the copy activity, change the data type of the source column and copy the data. Below is the approach to change the type.
Click the JSON representation of the pipeline.
Edit the data type in the JSON for the timestamp column to String (both in source and sink), as sketched below.
Once the pipeline is run, the data is copied into the sink in the source format.
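For illustration, the edited mapping in the pipeline JSON could look roughly like this (the column name LAST_MODIFIED is a placeholder):
"translator": {
    "type": "TabularTranslator",
    "mappings": [
        {
            "source": { "name": "LAST_MODIFIED", "type": "String" },
            "sink":   { "name": "LAST_MODIFIED", "type": "String" }
        }
    ]
}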
I'm currently using Oracle 11g. I extracted data from the schema once, on a specific date, to do some cleansing. Suppose that now I want to extract again, but only the new/updated data since the last date I extracted; is there any way I could get it? Unfortunately, this data does not have any column that stores a last-edited date.
I was wondering if Oracle automatically stores that type of info somewhere I could check, perhaps in some transaction log?
Thanks,
A Physal
One way would be to enable flashback and then you can do:
SELECT * FROM table1
MINUS
SELECT * FROM table1 AS OF TIMESTAMP TIMESTAMP '2018-01-01 00:00:00.000';
This returns all the rows added or changed since 2018-01-01 (rows deleted since then would need the reverse MINUS).
We created an external Parquet table in Hive and inserted the existing text-file data into it using INSERT OVERWRITE.
But we observed that dates from the existing text file do not match those in the Parquet files.
Data from the two files:
txt file date: 2003-09-06 00:00:00
parquet file date: 2003-09-06 04:00:00
Questions:
1) How can we resolve this issue?
2) Why are we getting this discrepancy in the data?
We faced a similar issue when Sqooping tables from SQL Server; it was because of a driver or JAR issue.
When you are doing the INSERT OVERWRITE, try using CAST for the date fields.
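A sketch of what that could look like (the table and column names are placeholders for your own):
-- Cast the problem column explicitly while loading the Parquet table
INSERT OVERWRITE TABLE parquet_table
SELECT id,
       CAST(event_date AS TIMESTAMP) AS event_date
FROM   text_table;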
This should work; let me know if you face any issues.
Thanks for your help. We are using both Beeline and the Impala query editor in Hue to access the data stored in the Parquet table; the timestamp issue occurs when we use an Impala query via Hue.
This is most likely related to a known difference in the way Hive and Impala handle timestamp values:
- When Hive stores a timestamp value in Parquet format, it converts the local time into UTC time, and when it reads the data out, it converts back to local time.
- Impala, on the other hand, does no conversion when it reads the timestamp field; hence, UTC time is returned instead of local time.
If your servers are located in the US Eastern time zone, this explains the +4h offset, as below:
- The timestamp 2003-09-06 00:00 in the example should be understood as EDT time (September 6 falls within daylight saving time, therefore a UTC-4 offset).
- +4h is added to the timestamp when it is stored by Hive.
- The same offset is subtracted when it is read back by Hive, giving the correct value.
- No correction is done when it is read back by Impala, thus showing 2003-09-06 04:00:00.
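If you cannot change how the data was written, one query-level workaround in Impala is to convert the value back yourself; from_utc_timestamp is available in both Hive and Impala, while the zone name and the table/column names below are my assumptions:
-- Reinterpret the UTC value Impala returns as US Eastern local time
SELECT from_utc_timestamp(event_ts, 'America/New_York') AS local_ts
FROM   parquet_table;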
I have an application (Java, Oracle 11g R2) which runs in a single time zone. Now it has to run across multiple time zones. My requirement is to store the data in one time zone (say IST) irrespective of the login time zone, but to have the application display dates and times to each user according to the respective entity's time zone. I will not be able to change all the Oracle queries already written. I want to attach each user to a location and, on login, alter the user's (Oracle) session to his local time zone.
Is this possible?
The datatype TIMESTAMP WITH LOCAL TIME ZONE is time zone aware. It stores the data in the time zone of the database server; if a user connects from a different time zone, the stored value is displayed in his local time zone.
To change DATE columns, you can use
ALTER TABLE mytable MODIFY d TIMESTAMP WITH LOCAL TIME ZONE;
The queries should still be inspected; for details, see Migrating Oracle DATE columns to TIMESTAMP with timezone.
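For the per-login part of the requirement, the session time zone can be set right after connect (a sketch; 'Asia/Kolkata' stands in for whatever zone is attached to the user):
-- Run once per connection, e.g. from a logon trigger or connection-pool hook
ALTER SESSION SET TIME_ZONE = 'Asia/Kolkata';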
Use the Oracle DATE type, which stores dates independently of time zone.
Then, note that the Java Date class is also time-zone-independent.
Done this way, dates only acquire time zones when you format them as strings to show them to the client.
I see this question so often that I recently wrote a blog post about it.
See it here.
I accidentally deleted many tables' records on Oracle 10g, and I have no backup; the database is in NOARCHIVELOG mode, and flashback is not enabled.
Is it possible to restore the data? If yes, how should I do it?
How long ago did you delete the data? If you deleted it a little less than an hour ago, for example, can you run this query and see if the data is still in UNDO?
SELECT *
FROM some_table AS OF TIMESTAMP systimestamp - INTERVAL '1' HOUR;
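If the old rows are still visible there, a hedged sketch for putting them back (some_table is the placeholder from above; this assumes the table structure is unchanged, and the MINUS avoids duplicating rows that survived):
-- Re-insert only the rows that existed an hour ago but are gone now
INSERT INTO some_table
SELECT * FROM some_table AS OF TIMESTAMP systimestamp - INTERVAL '1' HOUR
MINUS
SELECT * FROM some_table;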