Can anyone give me the syntax to truncate a table in IBM DB2?
I'm running the following command: truncate table tableName immediate;
The error is: DB2 SQLCODE=-104, SQLSTATE=42601, SQLERRMC=table;truncate ;JOIN , DRIVER=3.50.152
Message: An unexpected token "table" was found following "truncate ". Expected tokens may include: "JOIN ".. SQLCODE=-104, SQLSTATE=42601, DRIVER=3.50.152
The syntax matches the one specified in the reference docs of IBM : http://publib.boulder.ibm.com/infocenter/dzichelp/v2r2/index.jsp?topic=/com.ibm.db29.doc.sqlref/db2z_sql_truncate.htm
There is a great article about truncating; here is the gist of the DB2 specifics:
DB2 almost follows the standard (since version 9.7).
DB2 requires that the IMMEDIATE keyword be added to the ordinary TRUNCATE TABLE statement, e.g.:
TRUNCATE TABLE someschema.sometable IMMEDIATE
TRUNCATE TABLE must be the first statement in a transaction. A transaction starting with TRUNCATE TABLE may include other statements, but if the transaction is rolled back, the TRUNCATE TABLE operation is not undone.
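For example, a sketch with a placeholder table name; committing any open work first guarantees that the TRUNCATE opens the transaction:
COMMIT;
TRUNCATE TABLE someschema.sometable IMMEDIATE;
-- further statements may follow, but a rollback will not undo the truncate
COMMIT;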
DB2's TRUNCATE TABLE operation has a number of optional arguments; see the documentation for more on this. In particular, the REUSE STORAGE argument may be important for ad-hoc DBA tasks.
In DB2 versions < 9.7, you may abuse the IMPORT statement. Unfortunately, you need to know which operating system the command is executed from for this to work:
On unix-like systems:
IMPORT FROM /dev/null OF DEL REPLACE INTO tablename
On Windows:
IMPORT FROM NUL OF DEL REPLACE INTO tablename
IMPORT cannot be abused in all contexts. E.g., when working with dynamic SQL (from Java/.NET/PHP/...—not using the db2 command line processor), you need to wrap the IMPORT command in a call to ADMIN_CMD, e.g.:
CALL ADMIN_CMD('IMPORT FROM /dev/null OF DEL REPLACE INTO tablename')
IMPORT seems to be allowed in a transaction involving other operations, however it implies an immediate COMMIT operation.
The ALTER TABLE command may also be abused to quickly empty a table, but it requires more privileges, and may cause trouble with rollforward recovery.
This was taken from the website:
http://troels.arvin.dk/db/rdbms/#bulk-truncate_table-db2
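Regarding the ALTER TABLE trick mentioned above: on DB2 LUW it looks roughly like the following (a sketch with a placeholder table name; verify against your version and privileges before relying on it):
ALTER TABLE someschema.sometable ACTIVATE NOT LOGGED INITIALLY WITH EMPTY TABLE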
If you are using DB2 for AS400, TRUNCATE TABLE ... IMMEDIATE will NOT work. The equivalent workaround is to either:
DELETE FROM [tableName] and then, if the table has an auto-increment (identity) column, run:
ALTER TABLE [tableName] ALTER COLUMN [columnName] RESTART WITH 1
OR the faster (most efficient) way:
Pass a command to the system to clear out the Physical File
Java syntax:
CommandCall command = new CommandCall(new AS400(AS400SystemName, AS400JavaUser, AS400JavaPwd));
try {
    command.run("CLRPFM FILE(as400SchemaName/" + tableName + ")");
} catch (Exception e) {
    e.printStackTrace();
}
Which version of DB2 are you using? The truncate table command was introduced in DB2 v9 (at least on the mainframe, which appears to be what you're asking about based on the link).
You may have to resort to the delete from option although this article gives a stored procedure way of doing it in DB2 v8.
Use TRUNCATE TABLE table_name IMMEDIATE
This is the reference documentation for TRUNCATE, available in DB2 from version 9.7:
DB2 Reference for TRUNCATE
DB2 on z/OS V10
Empty one of your own tables: truncate <table>; followed by commit work. Ex.: truncate temp;
Someone else's table: truncate owner.table; Ex: truncate student.work;
I have not tried this on a linked DB2, so I do not know whether truncate node2.student.work; works.
SQL for generating the list of truncate statements automatically. Substring (substr) is used because the table name and creator columns are so wide. Your values may be different.
select 'truncate table '||substr(creator,1,9)||'.'||substr(name,1,20)
from sysibm.systables
where creator = 'Student';
In Java, make sure it is the first statement in the transaction.
alter table tablename rename column zl_divn_nbr to div_loc_nbr;
I get an error while executing the above statement. Please help.
SQL Error: ORA-54032: column to be renamed is used in a virtual column expression
54032. 0000 - "column to be renamed is used in a virtual column expression"
*Cause: Attempted to rename a column that was used in a virtual column
expression.
*Action: Drop the virtual column first or change the virtual column
expression to eliminate dependency on the column to be renamed
Run the following SQL query in your database using the table name mentioned in the error message. For example, in the error message shown in this article, the table name is 'tablename'. Note that whilst the table name appears in lower case in the error message, it may be upper case in your DB. This query is case sensitive so if you receive no results, check whether the table name is upper case inside your database.
SELECT COLUMN_NAME, DATA_DEFAULT, HIDDEN_COLUMN
FROM USER_TAB_COLS
WHERE TABLE_NAME = 'tablename';
Before proceeding, make sure the Bitbucket Server process is not running. If Extended Statistics has been enabled, contact your database administrator to have them drop the Extended Statistics metadata from the table, and proceed with your upgrade. If you wish to enable Extended Statistics again after the upgrade you may do so, however be aware that you may need to repeat this process again for subsequent upgrades otherwise you risk running into this issue again.
Removing columns created by Extended Statistics requires using an in-built stored procedure,
DBMS_STATS.DROP_EXTENDED_STATS().
Usage of this stored procedure is covered further in ORA-54033 and the Hidden Virtual Column Mystery, and looks similar to the following:
EXEC DBMS_STATS.DROP_EXTENDED_STATS(ownname=>'<YOUR_DB_USERNAME>', tabname=>'tablename', extension=>'("PR_ROLE", "USER_ID", "PR_APPROVED")')
References
Database Upgrade Error: column to be renamed
Thanks.
You probably have a table like this:
CREATE TABLE tablename(
id NUMBER,
zl_divn_nbr NUMBER,
zl_divn_percent NUMBER GENERATED ALWAYS AS (ROUND(zl_divn_nbr/100,2)) VIRTUAL
);
where the zl_divn_nbr column is used in the computation of the virtual column zl_divn_percent.
To rename zl_divn_nbr, all virtual columns referencing it must be dropped first; they may be re-created afterwards.
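A sketch of that workflow, based on the example table above (the virtual column is simply re-created against the new column name):
ALTER TABLE tablename DROP COLUMN zl_divn_percent;
ALTER TABLE tablename RENAME COLUMN zl_divn_nbr TO div_loc_nbr;
ALTER TABLE tablename ADD (zl_divn_percent NUMBER GENERATED ALWAYS AS (ROUND(div_loc_nbr/100,2)) VIRTUAL);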
The syntax for defining a virtual column is:
column_name [datatype] [GENERATED ALWAYS] AS (expression) [VIRTUAL]
Virtual columns have been available since version 11g Release 1.
ALTER TABLE rename column to
In the case of tables with virtual or 'group extension' columns, the above statement returns an error before Oracle 12cR2. For Oracle 12cR2 or newer versions the above statement runs fine, because the 'rename column' command is decoupled from the group-extension aspect.
I'm using an external tool that scans tables in my database. It uses dba_objects.last_ddl_time to determine which tables have been scanned. Obviously, this strategy does not work if the table data is modified in between scans so sometimes I have to help it...
I need a way to "bump" the Last DDL time without actually changing anything.
I'm looking for the simplest possible instant DDL statement that can be executed on any table, knowing just the table name.
I have sysdba privileges.
Edit:
For example, I can use comment on table xxx is 'Boom'; but then I lose the original comment. I know how to fix this, but then it is no longer a small and easy statement I can quickly type in SQL*Plus.
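To illustrate, restoring the comment afterwards means something like this (a sketch; the original text has to be read from USER_TAB_COMMENTS and put back by hand):
SELECT comments FROM user_tab_comments WHERE table_name = 'XXX';
COMMENT ON TABLE xxx IS 'Boom';
COMMENT ON TABLE xxx IS 'the original comment text read above';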
Changing LOGGING/NOLOGGING is pretty fast (though not instant).
If you set the LOGGING attribute back to itself, it will notch the LAST_DDL_TIME without making any real change to the table. The example below tries to touch every table except SYS-owned ones (presumably you'd want more limits here):
BEGIN
  FOR TABLE_POINTER IN (SELECT OWNER, TABLE_NAME, DECODE(LOGGING,'YES','LOGGING','NOLOGGING') DO_LOGGING
                          FROM DBA_TABLES
                         WHERE OWNER NOT IN ('SYSTEM','SYS','SYSBACKUP','MDSYS')  -- etc. other restrictions here
                       )
  LOOP
    EXECUTE IMMEDIATE UTL_LMS.FORMAT_MESSAGE('ALTER TABLE %s.%s %s',
                                             TABLE_POINTER.OWNER, TABLE_POINTER.TABLE_NAME, TABLE_POINTER.DO_LOGGING);
  END LOOP;
END;
/
EDIT: The above wouldn't work with temp tables. An alternative such as setting PCT_FREE to itself or another suitable attribute may be preferable. You may need to handle IOTs, Partitioned Tables, etc. differently than the rest of the tables as well.
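A sketch of the PCTFREE variant (owner and table names are placeholders; the idea is to set the attribute back to the value it already has):
SELECT pct_free FROM dba_tables WHERE owner = 'SOMEOWNER' AND table_name = 'SOMETABLE';
ALTER TABLE someowner.sometable PCTFREE 10;  -- use the value returned by the query above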
I am dealing with a table in Hive which does not have partitions and whose input format is TextInputFormat. It is not an external table; I created it using a "CREATE TABLE AS SELECT" statement.
I use the alter table statement to rename the table as given below:
ALTER TABLE testdb.temptable RENAME TO testdb.newtable;
I get the following error:
Error: Error while compiling statement: FAILED: ParseException line 1:32 mismatched input 'RENAME' expecting KW_EXCHANGE near 'temptable' in alter exchange partition (state=42000,code=40000)
Closing: org.apache.hive.jdbc.HiveConnection
It appears to be a bug in Hive. I am using this version:
Hive 0.12.0-cdh5.1.4
How do I go about fixing this issue? Thanks in advance for the help!
It's not exactly a bug, just a side effect of Open Source when it's done by a motley crew of people all around the world with no "product owner" and no incentives to use a common programming style (or run extensive regression tests, or <insert your complaint here>).
Aaaaaaah, now that it's said, I feel better... Let's get to the point.
In HiveQL the ALTER command does not use the same semantics as CREATE or SELECT; specifically, you cannot use the "ALTER DATABASE.TABLE" notation. If you try, the parser just fails with a cryptic error message, as you can see for yourself.
That's the way it is. You must type a use command first, then your alter command with just the table name. Yes, it sucks. But that's the way it is. And I see no reason why it should improve any time soon.
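Applied to the statement from the question, that would be something like:
USE testdb;
ALTER TABLE temptable RENAME TO newtable;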
[Update Jun-2017] looks like ALTER finally supports the DB.TABLE syntax, on recent Cloudera distro (tested on CDH 5.10 with Hive 1.1.0 - but since they usually include a number of back-ports in their distro, maybe it's a feature of Hive 1.2+)
I had a similar error message; it went away after using the alternative syntax of selecting the schema first and then referencing the table by its short name:
USE mydb;
ALTER TABLE mytable RECOVER PARTITIONS;
I have a text file, place.file:
place.file
New Hampshire
New Jersey
New Mexico
Nevada
New York
Ohio
Oklahoma
....
There are 4000 place names in this file. I will match my my_place table in Oracle against place.file, so I want to insert the contents of place.file into Oracle. Maybe I should use a bulk insert; how can I do a bulk insert?
You can use SQL Loader from Oracle.
The syntax is:
sqlldr <connection_string> control=<control_file.ctl>
The control file contains:
LOAD DATA
INFILE names.file
INTO TABLE <table_name>
FIELDS TERMINATED BY <delimiter>
OPTIONALLY ENCLOSED BY <enclosing character>
(<column_name>[, <column_name>, <column_name>])
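For the place.file example above, the control file might look roughly like this (a sketch; the single column name place_name is only a guess at your table's layout, and since the data contains no commas, each whole line lands in that one field):
LOAD DATA
INFILE 'place.file'
APPEND
INTO TABLE my_place
FIELDS TERMINATED BY ','
(place_name)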
No mention of an Oracle version. (For the best possible answer, always include Oracle version, Oracle edition, OS, and OS version.)
However, you should investigate using an external table for this purpose. Once you have that set up correctly, you can do:
insert into db_table select ... from external_table;
Optionally, you could use the APPEND hint on the INSERT statement, to use direct load.
Also, optionally, you could set the NOLOGGING attribute on the table you're loading the data into, for best performance. But consider the recovery implications before you enable NOLOGGING.
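Pulling those options together, a sketch of the external-table route (the directory path, external table name, and column name are all hypothetical):
CREATE DIRECTORY load_dir AS '/path/to/files';
CREATE TABLE place_ext (place_name VARCHAR2(100))
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY load_dir
  ACCESS PARAMETERS (RECORDS DELIMITED BY NEWLINE FIELDS TERMINATED BY ',')
  LOCATION ('place.file')
);
INSERT /*+ APPEND */ INTO my_place SELECT place_name FROM place_ext;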
Hope that helps,
-Mark
Say you generate ddl to create all your db tables etc via Hibernate SchemaExport etc. What you get is a script which starts with drop statements at the beginning. Not a problem, as I want this. But running this script produces a crapload of ORA-00942 errors running on an Oracle db.
Since they're not really errors if the tables just didn't exist yet, I'd like my create script to be error free when it executes so it's easy to determine what (if any) failed.
What are my options? I DO want drop statements generated since tables may or may not exist yet, but I don't want a million ORA-s coming back at me that I have to check (to determine if they're actual errors) just because it couldn't drop a table that's brand new.
"Say you generate ddl to create all
your db tables etc via Hibernate
SchemaExport etc. What you get is a
script which starts with drop
statements at the beginning. Not a
problem, as I want this. But running
this script produces a crapload of
ORA-00942 errors running on an Oracle
db."
Ideally we should maintain our schema properly, using source control and configuration management best practices. In this scenario we know beforehand whether the schema we run our scripts against contains those tables. We don't get errors because we don't attempt to drop tables which don't exist.
However it is not always possible to do this. One alternate approach is to have two scripts. The first script just has the DROP TABLE statements, prefaced with a friendly
PROMPT It is safe to ignore any ORA-00942 errors in the following statements
The second script has all the CREATE TABLE statements and leads off with
PROMPT All the statements in this script should succeed. So investigate any errors
Another option is to use the data dictionary:
begin
  for r in ( select table_name from user_tables )
  loop
    execute immediate 'drop table '||r.table_name
                      ||' cascade constraints';
  end loop;
end;
Be careful with this one. It is the nuclear option and will drop every table in your schema.
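If you would rather keep the generated DROP statements and just swallow the "table or view does not exist" errors, a per-table wrapper along these lines (table name is a placeholder) is another option:
BEGIN
  EXECUTE IMMEDIATE 'DROP TABLE table_xyz CASCADE CONSTRAINTS';
EXCEPTION
  WHEN OTHERS THEN
    IF SQLCODE != -942 THEN
      RAISE;
    END IF;
END;
/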
If you get a script of drop statements and Hibernate won't do it for you, then wrap each DROP TABLE in a check that the table exists before dropping it. In Oracle that check has to live in a PL/SQL block, roughly like this:
DECLARE
  cnt INTEGER;
BEGIN
  SELECT COUNT(*) INTO cnt FROM user_tables WHERE table_name = 'TABLE_XYZ';
  IF cnt > 0 THEN EXECUTE IMMEDIATE 'DROP TABLE TABLE_XYZ'; END IF;
END;