SQOOP NOLOGGING export task to an Oracle DB

I'm totally stumped trying to include NOLOGGING in my Sqoop export task from Hive to an Oracle database.
The Sqoop user guide (https://sqoop.apache.org/docs/1.4.6/SqoopUserGuide.html#_nologging) says to use:
-Doraoop.nologging=true
I think I've added the property below to my code correctly, but it does not seem to work.
<property>
<name>Doraoop.nologging</name>
<value>true</value>
</property>
The script below runs, but I'm not seeing any performance gains, which makes me think it is not working.
<!-- Sqoop export of data from HDFS to OR Datalab -->
<action name="SQOOP_EXPORT" retry-max="2" retry-interval="5">
<sqoop xmlns="uri:oozie:sqoop-action:0.4">
<job-tracker>${jobTracker}</job-tracker>
<name-node>${nameNode}</name-node>
<configuration>
<property>
<name>mapreduce.job.queuename</name>
<value>${yarn_queueName}</value>
</property>
<property>
<name>org.apache.sqoop.export.text.dump_data_on_error</name>
<value>true</value>
</property>
<property>
<name>hadoop.security.credential.provider.path</name>
<value>jceks://hdfs/user/lib/keystore.pswd</value>
</property>
<property>
<name>Doraoop.nologging</name>
<value>true</value>
</property>
<property>
<name>sqoop.export.records.per.statement</name>
<value>100000</value>
</property>
<property>
<name>sqoop.export.statements.per.transaction</name>
<value>10</value>
</property>
</configuration>
<arg>export</arg>
<arg>--connect</arg>
<arg>jdbc:oracle:thin:#*****test****:12345/DATALAND</arg>
<arg>--username</arg>
<arg>LANDING</arg>
<arg>--password-alias</arg>
<arg>pswd.ordl</arg>
<arg>--export-dir</arg>
<arg>${sqoopHDFSDataDir}</arg>
<arg>--table</arg>
<arg>${sqoopDataTable}</arg>
<arg>--columns</arg>
<arg>${sqoopDataColumns}</arg>
<arg>--input-fields-terminated-by</arg>
<arg>\001</arg>
<arg>--input-lines-terminated-by</arg>
<arg>\n</arg>
<arg>--input-null-string</arg>
<arg>\\N</arg>
<arg>--input-null-non-string</arg>
<arg>\\N</arg>
<arg>-m</arg>
<arg>${sqoopNumMappers}</arg>
</sqoop>
<ok to="HIVE2_LOG_SCRIPT"/>
<error to="Email_failure"/>
</action>

NOLOGGING in Oracle doesn't always mean "no logging". It only applies to certain direct-path writes in specific situations, and even then it doesn't necessarily mean that no redo is recorded. The NOLOGGING option can also be overridden by database or tablespace settings that force logging to always occur (likely the case in any production database).
Also beware that NOLOGGING operations affect your ability to recover from backups: always perform backups before and immediately after any NOLOGGING operation, and do not allow other logging transactions to take place while NOLOGGING operations are in progress.
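Before chasing NOLOGGING gains it is worth checking two things: whether the database or tablespace is in force logging mode (which silently overrides NOLOGGING), and whether the load actually uses direct-path writes. A quick sketch in SQL*Plus (landing_table and staging_table are placeholder names):
-- Is the whole database forcing redo logging? YES means NOLOGGING is ignored.
SELECT force_logging FROM v$database;
-- Is the target tablespace forcing logging?
SELECT tablespace_name, force_logging FROM dba_tablespaces;
-- NOLOGGING only affects direct-path writes; a conventional INSERT is always logged.
-- The APPEND hint requests a direct-path insert:
INSERT /*+ APPEND */ INTO landing_table SELECT * FROM staging_table;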
From the documentation:
https://docs.oracle.com/en/database/oracle/oracle-database/19/sqlrf/logging_clause.html#GUID-C4212274-5595-4045-A599-F033772C496E
"If you specify NOLOGGING, then the creation of a database object, as
well as subsequent conventional inserts, will be logged in the redo
log file. Direct-path inserts will not be logged...
If the object for which you are specifying the logging attributes
resides in a database or tablespace in force logging mode, then Oracle
Database ignores any NOLOGGING setting until the database or
tablespace is taken out of force logging mode...
NOLOGGING is supported in only a subset of the locations that support
LOGGING. Only the following operations support the NOLOGGING mode:
DML:
Direct-path INSERT (serial or parallel) resulting either from an INSERT or a MERGE statement. NOLOGGING is not applicable to any
UPDATE operations resulting from the MERGE statement.
Direct Loader (SQL*Loader)
DDL:
CREATE TABLE ... AS SELECT (In NOLOGGING mode, the creation of the table will be logged, but direct-path inserts will not be logged.)
CREATE TABLE ... LOB_storage_clause ... LOB_parameters ... CACHE | NOCACHE | CACHE READS
ALTER TABLE ... LOB_storage_clause ... LOB_parameters ... CACHE | NOCACHE | CACHE READS (to specify logging of newly created LOB
columns)
ALTER TABLE ... modify_LOB_storage_clause ... modify_LOB_parameters ... CACHE | NOCACHE | CACHE READS (to change logging of existing LOB
columns)
ALTER TABLE ... MOVE
ALTER TABLE ... (all partition operations that involve data movement)
ALTER TABLE ... ADD PARTITION (hash partition only)
ALTER TABLE ... MERGE PARTITIONS
ALTER TABLE ... SPLIT PARTITION
ALTER TABLE ... MOVE PARTITION
ALTER TABLE ... MODIFY PARTITION ... ADD SUBPARTITION
ALTER TABLE ... MODIFY PARTITION ... COALESCE SUBPARTITION
CREATE INDEX
ALTER INDEX ... REBUILD
ALTER INDEX ... REBUILD [SUB]PARTITION
ALTER INDEX ... SPLIT PARTITION"
Also, see here: http://www.dba-oracle.com/t_nologging_append.htm

First of all, please ensure you have the OraOop connector; Sqoop 1.4.5 and later bundles it as the Data Connector for Oracle and Hadoop (the user guide you linked is for 1.4.6).
Second, these -D options are sensitive to their position on the command line: generic options must come immediately after the tool name, before the tool-specific arguments. Can you please run a simple sqoop statement with all your parameters and see what issues you are facing?
An ideal sqoop statement with all the -D options looks like this:
sqoop export -Dsqoop.export.records.per.statement=10000 -Dsqoop.export.statements.per.transaction=100 -Doraoop.nologging=true <rest of commands>
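In an Oozie sqoop action, the same generic options can be passed as <arg> elements placed immediately after the tool name, instead of as <property> entries (a <property> named Doraoop.nologging is just an unrecognised Hadoop config key, which would explain the missing speed-up). A sketch against the action from the question; note that the oraoop.* options belong to the Data Connector for Oracle and Hadoop, which per the user guide is engaged with --direct, so that flag is included here as an assumption to verify:
<arg>export</arg>
<arg>-Doraoop.nologging=true</arg>
<arg>-Dsqoop.export.records.per.statement=100000</arg>
<arg>-Dsqoop.export.statements.per.transaction=10</arg>
<arg>--direct</arg>
<arg>--connect</arg>
<arg>jdbc:oracle:thin:#*****test****:12345/DATALAND</arg>
<!-- ...remaining arguments unchanged... -->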

Related

Oracle DB: move huge partition using transportable tablespace in shell script

We have a huge DB with about 20 million records daily.
We have an interval-partitioned table on the creation-time column (filled with SYSDATE by a before-insert trigger).
As this data is important and cannot be purged, and storage runs out after some time, we have used impdp and expdp to archive old data (we keep a few months): every month we export one partition and import it into the archive DB.
A disadvantage of this scenario is that dropping the exported partition after the import completes does not fully free the storage; our tablespaces seem to be the problem.
Another disadvantage is that the data keeps growing, and the export and import time has gone from mere hours to nearly 2 days, which affects our service quality.
Now we are thinking about using transportable tablespaces, but I'm not sure what scenario to use here.
Is it right to do this:
daily, create a tablespace with a datafile
make the new tablespace transportable
alter the user and set its default tablespace to the new one
after some time, when the data is old, export the tablespace using:
expdp transport_tablespaces={our new tablespace}
copy the data file from the source DB to the destination DB
import at the destination DB using:
impdp transport_datafiles='/path/to/data/{data file name}.dbf'
if everything went well, drop the source partition and free the space
Personally, I'm not sure if my scenario is right; did I understand transportable tablespaces correctly?
If my scenario is correct, can you provide a shell script to automate this?
First, there's no such thing as 'making a tablespace transportable' in Oracle.
You can do what you outline, but with some modifications:
As each of your new tablespaces will host a partition, you cannot export it as such; you must exchange the partition with a table (with indexes, ...) and then do the export.
You may run into the limit on the number of tablespaces you can create and manage, or on the number of data files, unless you pay attention to what you're doing.
On the archive DB, you'll have to import the tablespace, then do an exchange partition again. Depending on the number of partitions you need to keep, you may again run into the limit on the number of tablespaces. What you can do is import the tablespace, do an ALTER TABLE MOVE to put the data from this tablespace into one that has other partitions, and then drop the imported tablespace. A sketch of the whole flow follows.
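Below is a minimal shell sketch of that exchange/transport/import flow. Every name in it is an assumption to adapt (tablespace ARCH_TS, table SALES, staging table SALES_STAGE, partition P_2015_01, directory object DPUMP_DIR, hosts and credentials), and error handling, backups and self-containment checks are deliberately left out:
#!/bin/bash
set -e

# 1. Swap the old partition into a standalone table living in ARCH_TS.
sqlplus -s system/password <<'SQL'
ALTER TABLE sales EXCHANGE PARTITION p_2015_01 WITH TABLE sales_stage INCLUDING INDEXES;
-- A tablespace must be read only before it can be transported.
ALTER TABLESPACE arch_ts READ ONLY;
SQL

# 2. Export the tablespace metadata.
expdp system/password directory=DPUMP_DIR dumpfile=arch_ts.dmp transport_tablespaces=ARCH_TS

# 3. Ship the datafile and the dump file to the archive host.
scp /oradata/SRC/arch_ts01.dbf /dumps/arch_ts.dmp archive-host:/oradata/ARCH/

# 4. Plug the tablespace into the archive DB.
ssh archive-host "impdp system/password directory=DPUMP_DIR dumpfile=arch_ts.dmp transport_datafiles='/oradata/ARCH/arch_ts01.dbf'"

# 5. Back on the source: drop the now-empty partition and the transported tablespace.
sqlplus -s system/password <<'SQL'
ALTER TABLE sales DROP PARTITION p_2015_01;
DROP TABLESPACE arch_ts INCLUDING CONTENTS AND DATAFILES;
SQL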

delete or update operations are not working on hive 0.14

Does anyone know why the delete/update operations are not working in Hive 0.14 (they are supposed to work starting with version 0.14)? Even though I follow the steps/format to create the table, I get:
FAILED: SemanticException [Error 10294]: Attempt to do update or delete using transaction manager that does not support these operations.
upon running a delete operation. Please help me with this.
CREATE TABLE STUDENT
(
STD_ID INT,
STD_NAME STRING,
AGE INT,
ADDRESS STRING
)
CLUSTERED BY (ADDRESS) into 3 buckets
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
STORED as orc tblproperties('transactional'='true');
Delete and update work from 0.14 onwards; I was able to achieve the same.
You need to set these configuration parameters in Hive:
hive.support.concurrency = true
hive.enforce.bucketing = true
hive.exec.dynamic.partition.mode = nonstrict
hive.txn.manager = org.apache.hadoop.hive.ql.lockmgr.DbTxnManager
hive.compactor.initiator.on = true
hive.compactor.worker.threads = 1
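For a quick test, the same values can be set for the current session in the Hive shell (hive-site.xml is the place for a permanent change; the two compactor settings really belong to the metastore configuration, so treat their session-level SET as illustrative):
SET hive.support.concurrency=true;
SET hive.enforce.bucketing=true;
SET hive.exec.dynamic.partition.mode=nonstrict;
SET hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager;
SET hive.compactor.initiator.on=true;
SET hive.compactor.worker.threads=1;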
Then create a table with ACID support,
CREATE TABLE STUDENT
(
STD_ID INT,
STD_NAME STRING,
AGE INT,
ADDRESS STRING
)
CLUSTERED BY (ADDRESS) into 3 buckets
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
STORED as orc tblproperties('transactional'='true');
and do CRUD operations:
update STUDENT
set AGE = 24
where STD_ID = 19;
Please follow these steps:
1. Set the properties below in hive-site.xml.
2. Create the table again.
3. Load the data into the table.
4. Try the CRUD operations. It will work. Good luck!
<property>
<name>hive.support.concurrency</name>
<value>true</value>
</property>
<property>
<name>hive.enforce.bucketing</name>
<value>true</value>
</property>
<property>
<name>hive.exec.dynamic.partition.mode</name>
<value>nonstrict</value>
</property>
<property>
<name>hive.txn.manager</name>
<value>org.apache.hadoop.hive.ql.lockmgr.DbTxnManager</value>
</property>
<property>
<name>hive.compactor.initiator.on</name>
<value>true</value>
</property>
<property>
<name>hive.compactor.worker.threads</name>
<value>1</value>
</property>
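With the properties in place and the table recreated as above, a quick end-to-end check could look like this (the row values are just placeholders):
insert into table STUDENT values (19, 'John', 23, 'London');
update STUDENT set AGE = 24 where STD_ID = 19;
delete from STUDENT where STD_ID = 19;
select * from STUDENT;
Note that the bucketing column (ADDRESS here) cannot itself be updated.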

Attempt to do update or delete using transaction manager that does not support these operations

While trying to update data in a Hive table in the Cloudera Quickstart VM, I'm getting this error:
Error while compiling statement: FAILED: SemanticException [Error 10294]: Attempt to do update or delete using transaction manager that does not support these operations.
I added some changes to the hive-site.xml file and also restarted Hive and Cloudera. These are the changes I made in hive-site.xml:
hive.support.concurrency = true
hive.enforce.bucketing = true
hive.exec.dynamic.partition.mode = nonstrict
hive.txn.manager = org.apache.hadoop.hive.ql.lockmgr.DbTxnManager
hive.compactor.initiator.on = true
hive.compactor.worker.threads = 1
I've tried the configuration you provided in a Hortonworks sandbox and I was able to do ACID operations on a table, and I suppose it works in a Cloudera environment as well. There are a few things to mention, though:
make sure Hive has the properties you gave it (you can verify them in the Hive CLI using the SET command)
the table that you work with must be bucketed, declared in ORC format, and have 'transactional'='true' in its table properties (Hive supports ACID operations only for ORC-format, transactional tables). An example of a proper table looks like this:
hive>create table testTableNew(id int ,name string ) clustered by (id) into 2 buckets stored as orc TBLPROPERTIES('transactional'='true');
You can follow this example.
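Against a table like that, the ACID statements then work directly, for instance (values are placeholders):
hive> update testTableNew set name = 'updatedRow1' where id = 1;
hive> delete from testTableNew where id = 2;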

Hive table locks

I have hive tables which are queried through queries in a file.
I had invoked an oozie workflow which invoked a hive action for mentioned file.
The job did not succeed and I killed the workflow.
But the tables are still shown as locked on Hive CLI. I am looking for a command/process that will release locks from Hive tables.
We can use the following statements to release the lock:
set hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DummyTxnManager;
unlock table tablename;
If you use MySQL as the metastore, it stores table lock info in the table HIVE_LOCKS; truncate it (see below).
mysql> select * from HIVE_LOCKS;
Empty set (0.00 sec)
mysql>
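The truncate itself, for completeness (careful: this wipes the lock rows of every table, so only do it when no live Hive jobs are holding locks):
mysql> TRUNCATE TABLE HIVE_LOCKS;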
To check the locks on a table (run in Hive):
show locks tablename extended;
To find the application id for a long-running query, run this outside Hive (pass the User from the query above; you can also verify the Agent Info from the first query against the application name below):
yarn application -list | grep User
To kill the application:
yarn application -kill applicationid
I also met a similar problem in Hive 3, and I read the source code in org.apache.hadoop.hive.metastore.txn.TxnHandler. I found that there is a function called performTimeOuts(), which is scheduled periodically by a daemon thread called org.apache.hadoop.hive.metastore.txn.AcidHouseKeeperService.
That daemon thread cleans outdated lock information automatically in the MySQL table hive.hive_locks, but it is not enabled by default, so we just need to configure it in hive-site.xml, like this:
<property>
<name>metastore.task.threads.always</name>
<value>org.apache.hadoop.hive.metastore.events.EventCleanerTask,org.apache.hadoop.hive.metastore.RuntimeStatsCleanerTask,org.apache.hadoop.hive.metastore.repl.DumpDirCleanerTask,org.apache.hadoop.hive.metastore.txn.AcidHouseKeeperService</value>
</property>

How to delete and update a record in Hive

I have installed Hadoop, Hive, and the Hive JDBC driver, which are running fine for me. But I still have a problem: how do I delete or update a single record using Hive? The delete and update commands of MySQL do not work in Hive.
Thanks
hive> delete from student where id=1;
Usage: delete [FILE|JAR|ARCHIVE] <value> [<value>]*
Query returned non-zero code: 1, cause: null
As of Hive version 0.14.0: INSERT...VALUES, UPDATE, and DELETE are now available with full ACID support.
INSERT ... VALUES Syntax:
INSERT INTO TABLE tablename [PARTITION (partcol1[=val1], partcol2[=val2] ...)] VALUES values_row [, values_row ...]
Where values_row is:
( value [, value ...] )
where a value is either null or any valid SQL literal
UPDATE Syntax:
UPDATE tablename SET column = value [, column = value ...] [WHERE expression]
DELETE Syntax:
DELETE FROM tablename [WHERE expression]
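Put together, a minimal concrete run of those three statements could look like this (students is a hypothetical transactional table with columns id, name and gpa):
INSERT INTO TABLE students VALUES (1, 'Ann', 3.4), (2, 'Bob', 2.9);
UPDATE students SET gpa = 3.6 WHERE id = 1;
DELETE FROM students WHERE id = 2;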
Additionally, from the Hive Transactions doc:
If a table is to be used in ACID writes (insert, update, delete) then the table property "transactional" must be set on that table, starting with Hive 0.14.0. Without this value, inserts will be done in the old style; updates and deletes will be prohibited.
Hive DML reference:
https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DML
Hive Transactions reference:
https://cwiki.apache.org/confluence/display/Hive/Hive+Transactions
You should not think of Hive as a regular RDBMS; Hive is better suited for batch processing over very large sets of immutable data.
The following applies to versions prior to Hive 0.14; see the answer by ashtonium for later versions.
There is no operation supported for deletion or update of a particular record or particular set of records, and to me this is more a sign of a poor schema.
Here is what you can find in the official documentation:
Hadoop is a batch processing system and Hadoop jobs tend to have high latency and
incur substantial overheads in job submission and scheduling. As a result -
latency for Hive queries is generally very high (minutes) even when data sets
involved are very small (say a few hundred megabytes). As a result it cannot be
compared with systems such as Oracle where analyses are conducted on a
significantly smaller amount of data but the analyses proceed much more
iteratively with the response times between iterations being less than a few
minutes. Hive aims to provide acceptable (but not optimal) latency for
interactive data browsing, queries over small data sets or test queries.
Hive is not designed for online transaction processing and does not offer
real-time queries and row level updates. It is best used for batch jobs over
large sets of immutable data (like web logs).
A way to work around this limitation is to use partitions: I don't know what your id corresponds to, but if you're getting different batches of ids separately, you could redesign your table so that it is partitioned by id; then you can easily drop partitions for the ids you want to get rid of, as sketched below.
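A minimal sketch of that pattern (all names here are illustrative):
-- batch_id plays the role of the id being partitioned by
create table events_by_batch (payload string)
partitioned by (batch_id int);
-- "deleting" batch 1 is then just dropping its partition
alter table events_by_batch drop partition (batch_id = 1);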
Yes, rightly said. Hive does not support an UPDATE option.
But the following alternative can be used to achieve the result:
Update records in a partitioned Hive table:
The main table is assumed to be partitioned by some key.
Load the incremental data (the data to be updated) to a staging table partitioned with the same keys as the main table.
Join the two tables (the main and staging tables) using a LEFT OUTER JOIN operation as below:
insert overwrite table main_table partition (c, d)
select t2.a, t2.b, t2.c, t2.d
from staging_table t2
left outer join main_table t1 on t1.a = t2.a;
In the above example, main_table and staging_table are partitioned using the (c,d) keys. The tables are joined via a LEFT OUTER JOIN and the result is used to OVERWRITE the partitions in main_table.
A similar approach can be used for UPDATE operations on un-partitioned Hive tables too.
You can delete rows from a table using a workaround in which you overwrite the table with the dataset you want left in the table as the result of your operation:
insert overwrite table your_table
select * from your_table
where id <> 1
;
The workaround is useful mostly for bulk deletions of easily identifiable rows. Also, doing this can obviously muck up your data, so a backup of the table is advised, as is care when planning the "deletion" rule.
Once you have installed and configured Hive, create a simple table:
hive> create table testTable(id int, name string) row format delimited fields terminated by ',';
Then try to insert a few rows into the test table:
hive> insert into table testTable values (1,'row1'),(2,'row2');
Now try to delete a record you just inserted:
hive> delete from testTable where id = 1;
Error!
FAILED: SemanticException [Error 10294]: Attempt to do update or delete using transaction manager that does not support these operations.
By default, transactions are configured to be off, and the default transaction manager does not support update or delete operations. To support update/delete, you must change the following configuration.
cd $HIVE_HOME
vi conf/hive-site.xml
Add below properties to file
<property>
<name>hive.support.concurrency</name>
<value>true</value>
</property>
<property>
<name>hive.enforce.bucketing</name>
<value>true</value>
</property>
<property>
<name>hive.exec.dynamic.partition.mode</name>
<value>nonstrict</value>
</property>
<property>
<name>hive.txn.manager</name>
<value>org.apache.hadoop.hive.ql.lockmgr.DbTxnManager</value>
</property>
<property>
<name>hive.compactor.initiator.on</name>
<value>true</value>
</property>
<property>
<name>hive.compactor.worker.threads</name>
<value>2</value>
</property>
Restart the service and then try the delete command again:
Error!
FAILED: LockException [Error 10280]: Error communicating with the metastore.
There is a problem with the metastore. In order to use insert/update/delete operations, you also need to change the following configuration in conf/hive-site.xml, as the feature was still in development:
<property>
<name>hive.in.test</name>
<value>true</value>
</property>
Restart the service and then run the delete command again:
hive>delete from testTable where id = 1;
Error!
FAILED: SemanticException [Error 10297]: Attempt to do update or delete on table default.testTable that does not use an AcidOutputFormat or is not bucketed.
Only ORC file format is supported in this first release. The feature has been built such that transactions can be used by any storage format that can determine how updates or deletes apply to base records (basically, that has an explicit or implicit row id), but so far the integration work has only been done for ORC.
Tables must be bucketed to make use of these features. Tables in the same system not using transactions and ACID do not need to be bucketed.
See the table example below, with ORC file format, bucketing enabled, and ('transactional'='true'):
hive>create table testTableNew(id int ,name string ) clustered by (id) into 2 buckets stored as orc TBLPROPERTIES('transactional'='true');
Insert :
hive>insert into table testTableNew values (1,'row1'),(2,'row2'),(3,'row3');
Update :
hive>update testTableNew set name = 'updateRow2' where id = 2;
Delete :
hive>delete from testTableNew where id = 1;
Test :
hive>select * from testTableNew ;
Configuration Values to Set for INSERT, UPDATE, DELETE
In addition to the new parameters listed above, some existing parameters need to be set to support INSERT ... VALUES, UPDATE, and DELETE.
Configuration key                    Must be set to
hive.support.concurrency             true (default is false)
hive.enforce.bucketing               true (default is false; not required as of Hive 2.0)
hive.exec.dynamic.partition.mode     nonstrict (default is strict)
Configuration Values to Set for Compaction
If the data in your system is not owned by the Hive user (i.e., the user that the Hive metastore runs as), then Hive will need permission to run as the user who owns the data in order to perform compactions. If you have already set up HiveServer2 to impersonate users, then the only additional work to do is assure that Hive has the right to impersonate users from the host running the Hive metastore. This is done by adding the hostname to hadoop.proxyuser.hive.hosts in Hadoop's core-site.xml file. If you have not already done this, then you will need to configure Hive to act as a proxy user. This requires you to set up keytabs for the user running the Hive metastore and add hadoop.proxyuser.hive.hosts and hadoop.proxyuser.hive.groups to Hadoop's core-site.xml file. See the Hadoop documentation on secure mode for your version of Hadoop (e.g., for Hadoop 2.5.1 it is at Hadoop in Secure Mode).
The UPDATE statement has the following limitations:
The expression in the WHERE clause must be an expression supported by a Hive SELECT clause.
Partition and bucket columns cannot be updated.
Query vectorization is automatically disabled for UPDATE statements. However, updated tables can still be queried using vectorization.
Subqueries are not allowed on the right side of the SET statement.
The following example demonstrates the correct usage of this statement:
UPDATE students SET name = null WHERE gpa <= 1.0;
DELETE Statement
Use the DELETE statement to delete data already written to Apache Hive.
DELETE FROM tablename [WHERE expression];
The DELETE statement has the following limitation:
query vectorization is automatically disabled for the DELETE operation.
However, tables with deleted data can still be queried using vectorization.
The following example demonstrates the correct usage of this statement:
DELETE FROM students WHERE gpa <= 1.0;
The CLI told you where your mistake is: delete WHAT? from student ... (the bare delete was parsed as the CLI's resource command, delete [FILE|JAR|ARCHIVE], not as SQL).
Delete: How to delete/truncate tables from Hadoop-Hive?
Update: Update, SET option in Hive
If you want to delete all records, then as a workaround, load an empty file into the table in OVERWRITE mode:
hive> LOAD DATA LOCAL INPATH '/root/hadoop/textfiles/empty.txt' OVERWRITE INTO TABLE employee;
Loading data to table default.employee
Table default.employee stats: [numFiles=1, numRows=0, totalSize=0, rawDataSize=0]
OK
Time taken: 0.19 seconds
hive> SELECT * FROM employee;
OK
Time taken: 0.052 seconds
An upcoming version of Hive will allow SET-based update/delete handling, which is of utmost importance when trying to do CRUD operations on a 'bunch' of rows instead of taking one row at a time.
In the interim, I have tried a dynamic-partition-based approach documented here: http://linkd.in/1Fq3wdb
Please see if it suits your need.
UPDATE or DELETE of a record isn't allowed in Hive, but INSERT INTO is acceptable.
A snippet from Hadoop: The Definitive Guide (3rd edition):
Updates, transactions, and indexes are mainstays of traditional databases. Yet, until recently, these features have not been considered a part of Hive's feature set. This is because Hive was built to operate over HDFS data using MapReduce, where full-table scans are the norm and a table update is achieved by transforming the data into a new table. For a data warehousing application that runs over large portions of the dataset, this works well.
Hive doesn't support updates (or deletes), but it does support INSERT INTO, so it is possible to add new rows to an existing table.
To achieve your current need, you need to fire the query below:
> insert overwrite table student
> select *from student
> where id <> 1;
This overwrites the table with all rows except the ones you want to exclude/delete.
I tried this on Hive 1.2.1.
There are a few properties to set to make a Hive table support ACID and thus UPDATE, INSERT, and DELETE as in SQL.
Conditions to create an ACID table in Hive:
1. The table must be stored as an ORC file. Only the ORC format can support ACID properties for now.
2. The table must be bucketed.
Properties to set to create an ACID table:
set hive.support.concurrency=true;
set hive.enforce.bucketing=true;
set hive.exec.dynamic.partition.mode=nonstrict;
set hive.compactor.initiator.on=true;
set hive.compactor.worker.threads=1;
set hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager;
Set the property hive.in.test to true in hive-site.xml.
After setting all these properties, the table should be created with the table property 'transactional'='true'. The table should be bucketed and stored as ORC:
CREATE TABLE table_name (col1 int, col2 string, col3 int)
CLUSTERED BY (col1) INTO 4 BUCKETS
STORED AS orc tblproperties('transactional'='true');
Now the Hive table can support UPDATE and DELETE queries
Delete has been recently added in Hive version 0.14
Deletes can only be performed on tables that support ACID
Below is the link from Apache .
https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DML#LanguageManualDML-Delete
Good news: inserts, updates and deletes are now possible on Hive/Impala using Kudu.
You need to use Impala/Kudu to maintain the tables and perform insert/update/delete on the records.
Details with examples can be found here:
insert-update-delete-on-hadoop
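For reference, a minimal Impala-on-Kudu sketch (table name and schema are made up; Kudu tables require a primary key):
-- Impala SQL against a Kudu-backed table
CREATE TABLE users (id BIGINT, name STRING, PRIMARY KEY (id))
PARTITION BY HASH (id) PARTITIONS 4
STORED AS KUDU;
INSERT INTO users VALUES (1, 'ann');
UPDATE users SET name = 'anne' WHERE id = 1;
DELETE FROM users WHERE id = 1;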
Recently I was looking to resolve a similar issue. Apache Hive and Hadoop do not support update/delete operations out of the box, so what can you do?
You have two ways:
Use a backup table: save the whole table in a backup table, then truncate your input table, then re-write only the data you are interested in keeping.
Use Uber's Hudi: a framework created by Uber to work around HDFS limitations, including deletion and update. You can have a look at this link:
https://eng.uber.com/hoodie/
An example for point 1:
create table bck_table like input_table;
insert overwrite table bck_table
select * from input_table;
truncate table input_table;
insert overwrite table input_table
select * from bck_table where id <> 1;
NB: If the input_table is an external table, you must follow this link:
How to truncate a partitioned external table in hive?
If you want to perform Hive CRUD using ACID operations, you need to check whether you have Hive version 0.14 or later.
In order to perform CREATE, SELECT, UPDATE, and DELETE, you have to ensure the table was created with the following conditions:
the file format must be ORC, with TBLPROPERTIES('transactional'='true')
the table must be CLUSTERED BY into some buckets; please refer to the CREATE TABLE statement below
You can use the query below to create a table with the above properties:
CREATE TABLE STUDENT
(
STD_ID INT,
STD_NAME STRING,
AGE INT,
ADDRESS STRING
)
CLUSTERED BY (ADDRESS) into 3 buckets
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
STORED as orc tblproperties('transactional'='true');
