How to repair or ignore corrupted blocks in Oracle in NOARCHIVELOG mode

I have a table partitioned by date. Today, when I ran a query for the entire month, I got the following error:
SQL Error [1578] [72000]: ORA-01578: ORACLE data block corrupted (file # 10, block # 19007437)
ORA-01110: data file 10: '\UDR''
Unfortunately, as I now know, the database has been in NOARCHIVELOG mode all along. Investigating further, I found that this block is in partition 9 of the table, which holds the data for 9 February.
So how can I recover from this issue? I tried to validate the blocks from RMAN and then recover them, but I got a message that there is no backup, since my database was in NOARCHIVELOG mode.
Any suggestions on how to either ignore the corrupted block while running the query? Or, if I drop the data for 9 February and reload it, will that fix the issue?
When I run:
select * from v$database_block_corruption;
I see:
FILE# | BLOCK#   | BLOCKS | CORRUPTION_CHANGE# | CORRUPTION_TYPE | CON_ID
10    | 19007437 | 1      | 0                  | FRACTURED       | 0
When I look the block up in dba_extents, I see:
SEGMENT_NAME | SEGMENT_TYPE  | BLOCK_ID
tablename    | partitionname | 19007437
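Since the database is in NOARCHIVELOG mode with no usable backup, RMAN block media recovery is not an option. One way to let queries run past the bad block is DBMS_REPAIR; a minimal sketch, where the owner MYSCHEMA is a placeholder and the object name comes from the dba_extents lookup above (depending on how the block is marked, a prior DBMS_REPAIR.CHECK_OBJECT / FIX_CORRUPT_BLOCKS pass may be needed):

BEGIN
  -- let scans skip blocks marked corrupt instead of raising ORA-01578
  DBMS_REPAIR.SKIP_CORRUPT_BLOCKS(
    schema_name => 'MYSCHEMA',   -- placeholder: the table owner
    object_name => 'TABLENAME',  -- segment_name from dba_extents
    object_type => DBMS_REPAIR.TABLE_OBJECT,
    flags       => DBMS_REPAIR.SKIP_FLAG);
END;
/

As for the reload idea: dropping (or truncating) the 9 February partition and reloading the data should also make the error go away, since the corrupt blocks are returned to free space and reinitialized when they are next used.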

Related

Hive "insert into" doesnt add values

I'm new to Hadoop and its ecosystem.
I connect via Beeline to HiveServer2. Then I create a table:
create table test02(id int, name string);
The table is created, and I try to insert values:
insert into test02(id, name) values (1, "user1");
And nothing happens. test02 and values__tmp__table__1 are created, but they are both empty.
The Hadoop directory "/user/$username/warehouse/test01" is empty too.
0: jdbc:hive2://localhost:10000> insert into test02 values (1,"user1");
No rows affected (2.284 seconds)
0: jdbc:hive2://localhost:10000> select * from test02;
+------------+--------------+
| test02.id  | test02.name  |
+------------+--------------+
+------------+--------------+
No rows selected (0.326 seconds)
0: jdbc:hive2://localhost:10000> show tables;
+-----------------------+
| tab_name              |
+-----------------------+
| test02                |
| values__tmp__table__1 |
+-----------------------+
2 rows selected (0.137 seconds)
Temp tables like these are created when Hive needs to manage intermediate data during an operation. Hive automatically deletes all temporary tables at the end of the Hive session in which they are created. If you close the session and open it again, you won't find the temp table.
https://docs.cloudera.com/HDPDocuments/HDP2/HDP-2.5.0/bk_data-access/content/temp-tables.html
Insert data like this:
insert into test02 values (999, "user_new");
Data will be inserted into test02, and a temp table like values__tmp__table__1 will be created (the temp table will be gone once the Hive session ends).
I found a solution. I'm new to the Hadoop ecosystem, so the answer was not obvious to me.
First, I turned Hive logging to level ERROR to see the problem:
Find hive-exec-log4j2.properties ({your hive directory}/conf/)
Find property.hive.log.level and set its value to ERROR.
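That is, the resulting line in hive-exec-log4j2.properties should read:
property.hive.log.level = ERROR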
Then, while executing the command insert into via Beeline, I saw all of the errors. The main error was:
There are 0 datanode(s) running and no node(s) are excluded in this operation
I found the same question elsewhere. The top answer, which helped me, was to delete all /tmp/* files (they stored all of my local HDFS data).
Then, as on first setup, I initialized the NameNode (-format) and Hive (ran my metahive script).
The problem was solved, though it exposed another issue I'll need to look into: the insert into now takes 25+ seconds to execute.

Impala via JDBC: retrieve number of dropped partitions

I am dropping multiple partitions of an Impala table via
ALTER TABLE foobar DROP IF EXISTS PARTITION (pkey='foo' OR pkey='bar');
When using impala-shell, I am presented with a result telling me how many partitions were actually dropped:
Starting Impala Shell without Kerberos authentication
***********************************************************************************
Welcome to the Impala shell.
(Impala Shell v3.2.0-cdh6.3.2 (1bb9836) built on Fri Nov 8 07:22:06 PST 2019)
The SET command shows the current value of all shell and query options.
***********************************************************************************
Opened TCP connection to impala:21000
Connected to impala:21000
Server version: impalad version 3.2.0-cdh6.3.2 RELEASE (build 1bb9836227301b839a32c6bc230e35439d5984ac)
[impala:21000] default> use my_schema;
Query: use my_schema
[impala:21000] my_schema> ALTER TABLE FOOBAR DROP IF EXISTS PARTITION (pkey='foo' OR pkey='bar');
Query: ALTER TABLE FOOBAR DROP IF EXISTS PARTITION (pkey='foo' OR pkey='bar')
+-------------------------+
| summary                 |
+-------------------------+
| Dropped 1 partition(s). |
+-------------------------+
Fetched 1 row(s) in 0.13s
Now, in our production code, we are stuck using only JDBC. When executing the same DDL statement via JDBC, my Statement st gives st.getResultSet() == null and st.getUpdateCount() == -1.
Is there a way to retrieve the number of dropped partitions via JDBC only?
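One workaround sketch, not a confirmed capability of the Impala JDBC driver: SHOW PARTITIONS returns a regular result set, so the partition count can be compared before and after the drop:

SHOW PARTITIONS foobar;   -- one row per partition (plus a Total summary row in some versions)
ALTER TABLE foobar DROP IF EXISTS PARTITION (pkey='foo' OR pkey='bar');
SHOW PARTITIONS foobar;   -- the difference in row counts is the number of partitions dropped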

Cannot query a Hive table partition after deleting the HDFS files related to the partition

My Hadoop cluster runs a batch job every day at 11:00.
The job creates a Hive table partition (e.g. p_date=201702, p_domain=0) and imports RDBMS data into that partition, ETL-style. (The Hive table is not an external table.)
But the job failed, and I removed some HDFS files (the partition location, p_date=20170228, p_domain=0) in order to reprocess.
That was my mistake; I should simply have typed a drop-partition query in Beeline...
Now a query like "select * from table_name where p_date=20170228 and p_domain=0" hangs, but "select * from table_name where p_date=20170228 and p_domain=6" succeeds.
I cannot find an error log, and no console message appears.
How can I solve this problem?
I hope you understand, despite my limited English.
You should not delete partitions from a Hive table that way. There is a special command for this:
ALTER TABLE table_name DROP IF EXISTS PARTITION (partitioncolumn='somevalue');
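With the partition keys from the question, that would look something like this (a sketch; whether the values need quotes depends on the partition column types):

ALTER TABLE table_name DROP IF EXISTS PARTITION (p_date=20170228, p_domain=0);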
Deleting the files from HDFS is not sufficient; you also need to clean the data out of the metastore. For that, connect to your relational database and remove the rows from the partition-related tables in the metastore database.
mysql
mysql> use hive;
mysql> SELECT * FROM PARTITIONS WHERE PART_NAME LIKE '%p_date=20170228/p_domain=0%';
+---------+-------------+------------------+--------------------+-------+--------+
| PART_ID | CREATE_TIME | LAST_ACCESS_TIME | PART_NAME          | SD_ID | TBL_ID |
+---------+-------------+------------------+--------------------+-------+--------+
|       7 |  1487237959 |                0 | partition name     |   336 |    329 |
+---------+-------------+------------------+--------------------+-------+--------+
mysql> DELETE FROM PARTITIONS WHERE PART_ID=7;
mysql> DELETE FROM PARTITION_KEY_VALS WHERE PART_ID=7;
mysql> DELETE FROM PARTITION_PARAMS WHERE PART_ID=7;
After this Hive should stop using this partition in your queries.

Oracle - redo log sequence number is different from the sequence number the server expects

I have an Oracle database with problems that prevent it from opening.
To overcome the issues, I tried the following steps:
First, I mounted the database:
SQL> startup mount;
ORA-32004: obsolete and/or deprecated parameter(s) specified
ORACLE instance started.
Total System Global Area 1.2560E+10 bytes
Fixed Size 2171344 bytes
Variable Size 6878662192 bytes
Database Buffers 5670699008 bytes
Redo Buffers 8601600 bytes
Database mounted.
After that, I attempted to recover the database as below:
SQL> recover database until cancel;
ORA-00279: change 338584095 generated at 11/22/2016 08:41:55 needed for thread 1
ORA-00289: suggestion : /oracle/app/product/11g/db/dbs/arch1_9218_833801667.dbf
ORA-00280: change 338584095 for thread 1 is in sequence #9218
Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
cancel
ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS would get error below
ORA-01194: file 1 needs more recovery to be consistent
ORA-01110: data file 1: '/oracle/app/oradata/ora11g/system01.dbf'
ORA-01112: media recovery not started
After this, I tried to open the database with RESETLOGS, as below:
SQL> alter database open resetlogs;
alter database open resetlogs
*
ERROR at line 1:
ORA-01194: file 1 needs more recovery to be consistent
ORA-01110: data file 1: '/oracle/app/oradata/ora11g/system01.dbf'
Finally, I tried recovering the system01 datafile, as below:
SQL> recover datafile 1;
ORA-00283: recovery session canceled due to errors
ORA-00314: log 2 of thread 1, expected sequence# 9218 doesn't match 9215
ORA-00312: online log 2 thread 1: '/oracle/app/oradata/ora11g/redo02.log'
As you can see in the final error, "ORA-00314: log 2 of thread 1, expected sequence# 9218 doesn't match 9215", there is a sequence mismatch between the logfile redo02.log and what the server expects.
How can this mismatch occur, and what can I do to fix it?
PS: Since the database cannot be opened, I cannot switch logfiles; and since redo02.log is the current logfile, I cannot drop or clear it.
SQL> select * from v$log;

    GROUP#    THREAD#  SEQUENCE#      BYTES    MEMBERS ARC STATUS   FIRST_CHANGE# FIRST_TIME
---------- ---------- ---------- ---------- ---------- --- -------- ------------- ----------
         1          1          0   52428800          1 NO  UNUSED       338564041 22-NOV-16
         3          1          0   52428800          1 NO  UNUSED       338544000 22-NOV-16
         2          1       9218   52428800          1 NO  CURRENT      338584094 22-NOV-16
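One diagnostic worth running (a sketch using the standard v$ views) is to list which on-disk member backs each log group, lining the controlfile's view of group 2 (sequence 9218) up against the ORA-00314 message, which says the file header on disk carries 9215:

SELECT l.group#, l.sequence#, l.status, f.member
FROM   v$log l
JOIN   v$logfile f ON f.group# = l.group#;

A file header that is behind the controlfile like this typically means an older copy of the file was restored or copied over the current redo log.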

ERROR 1064 (42000) at line 38: check the manual that corresponds to your MySQL server version for the right syntax to use near '//

I am trying to run a MySQL script on CentOS. I have the following MySQL version installed:
mysql> SHOW VARIABLES LIKE "%version%";
+-------------------------+----------------------+
| Variable_name           | Value                |
+-------------------------+----------------------+
| innodb_version          | 5.6.25-73.1          |
| protocol_version        | 10                   |
| slave_type_conversions  |                      |
| version                 | 5.6.25-73.1          |
| version_comment         | Percona Server (GPL) |
| version_compile_machine | x86_64               |
| version_compile_os      | Linux                |
+-------------------------+----------------------+
My sample script looks like:
DELIMITER //

DROP TRIGGER IF EXISTS trg_table1_category_insert;
DROP TRIGGER IF EXISTS trg_table1_category_update;

CREATE TRIGGER trg_table1_category_insert
AFTER INSERT ON table1_category
FOR EACH ROW
BEGIN
    insert into table1_category_history (
        table1_category_history_id,
        table1_id,
        transaction_start_date
    ) values (
        new.table1_category_id,
        new.table1_id,
        new.create_date
    );
END;
//

CREATE TRIGGER trg_table1_category_update
AFTER UPDATE ON table1_category
FOR EACH ROW
BEGIN
    insert into table1_category_history (
        table1_category_history_id,
        table1_id,
        transaction_start_date
    ) values (
        new.table1_category_id,
        new.table1_id,
        new.create_date
    );
END;
//

DELIMITER ;
My database uses utf8 encoding. While importing this file into the database via the mysql client, it keeps throwing:
ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '//
CREATE TRIGGER trg_table1_category_update AFTER UPDATE on ta' at line 1
I do not see any syntax error in my use of the delimiter, and the script works absolutely fine on some machines. I have googled hundreds of links and tried everything: downgrading/upgrading the MySQL server and client, my.cnf, charset, etc., but nothing has helped. Can anyone help me with this? Is there any client-level setting that would make it interpret the delimiter correctly? I am using the same client version that ships with the MySQL server installation.
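One workaround worth sketching, since DELIMITER is a client-side command that some clients and drivers do not interpret: each trigger body here is a single INSERT, so the BEGIN ... END block, and with it the custom delimiter, can be dropped entirely. For example, for the first trigger:

CREATE TRIGGER trg_table1_category_insert
AFTER INSERT ON table1_category
FOR EACH ROW
INSERT INTO table1_category_history (
    table1_category_history_id,
    table1_id,
    transaction_start_date
) VALUES (
    NEW.table1_category_id,
    NEW.table1_id,
    NEW.create_date
);

With single-statement bodies, every statement in the script ends with the default ';', so the file imports the same way on any client.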
