Vertica identity column datatype change

In Vertica I want to change the datatype of the identity column. For example:
CREATE TABLE t1(x IDENTITY(1), y INT)
Is there a way to change the identity column's increment value from 1 to, say, 10000?
Having created the above table, I now want to change the identity column from x IDENTITY(1) to x IDENTITY(10000).
I tried the SQL below, but it does not work:
alter table t1 alter column x SET DATA TYPE IDENTITY ( 10000 );

From the documentation:
You cannot change the value of an IDENTITY column once the table exists.
The available parameters for IDENTITY are:
IDENTITY [ ( cache ) |
( start, increment[, cache ] ) ]

Your best option is to recreate the table with the new identity properties and then copy the data from the old table into the new one.
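A minimal sketch of that approach, assuming a hypothetical new table name t1_new (check the exact identity and rename syntax against your Vertica version):
-- New identity definition; per the syntax above, IDENTITY(10000, 1) starts at 10000 with increment 1 (adjust to what you need)
CREATE TABLE t1_new (x IDENTITY(10000, 1), y INT);
-- Copy the non-identity data; x is repopulated by the new identity sequence
INSERT INTO t1_new (y) SELECT y FROM t1;
-- Swap the tables
DROP TABLE t1;
ALTER TABLE t1_new RENAME TO t1;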

Related

Modify a nested table in Oracle

I have created the nested table below:
create or replace TYPE access_t AS OBJECT (
    AccessID    VARCHAR2(50),
    Eligibility CHAR(1)
);
/
create or replace TYPE Access_tab IS TABLE OF access_t;
/
create or replace TYPE add_t AS OBJECT (
    city          VARCHAR2(100),
    state         VARCHAR2(100),
    zip           VARCHAR2(10),
    APOINTSARRAY  Access_tab
);
/
create or replace TYPE add_tab IS TABLE OF add_t;
/
CREATE TABLE RQST_STATUS
( RQST_ID  NUMBER,
  ADDRESS  add_tab
)
NESTED TABLE ADDRESS STORE AS RQST_STATUS_ADDRESS
( NESTED TABLE APOINTSARRAY STORE AS RQST_STATUS_AP)
;
If I need to change the ADDRESS type from add_tab to a new_add_tab with some additional columns, can I just use an ALTER TABLE .. MODIFY .. command?
I am getting ORA-00922 or ORA-22913 errors. I cannot change the type directly because it is used elsewhere too. Also, the table is already loaded with data.
Please suggest.
You can do that, but you have to alter the TYPE, not the TABLE.
Check the documentation for the ALTER TYPE statement: alter_method_spec
Most important is the CASCADE keyword.
Examples:
ALTER TYPE access_t ADD ATTRIBUTE NEW_Eligibility INTEGER CASCADE;
ALTER TYPE access_t DROP ATTRIBUTE Eligibility CASCADE;
ALTER TYPE access_t MODIFY ATTRIBUTE AccessID VARCHAR2(100) CASCADE;
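Applied to the question's ADDRESS type, a sketch might look like this (the added attribute name country is hypothetical):
-- CASCADE propagates the change to dependent types and tables (add_tab, RQST_STATUS)
ALTER TYPE add_t ADD ATTRIBUTE country VARCHAR2(100) CASCADE;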
Here is a step-by-step description of my suggestion. It might not be the most elegant, but I think it is best for you to have something you can fully understand (as opposed to an obscure trick).
Also, since I don't really know what kind of changes you need to make to the internal table, I'm leaving you maximal flexibility to make any change you may wish.
Let's call your table T1; it contains a column C_T which is your internal table.
The internal table contains columns C_1, C_2 and C_3, and you want the new structure for the record to be D_1, D_2, D_3, D_4 and D_5, where the mapping is:
C_1 -> D_5,
C_2 -> D_1,
C_3 -> D_2,
{new} -> D_3,
{new} -> D_4.
Create a temporary table TEMPO_T with a column SOURCE_ROWID (VARCHAR2(64)) and the new columns D_1, ..., D_5.
Write a small anonymous block with a cursor that selects the ROWID of each row of table T1 and all the records within the internal table in column C_T (ordered by ROWID). The result would look like this (just an example, of course):
ROWID          C_1  C_2  C_3
wwereeedffff     1  a    ww
wwereeedffff     2  b    xx
wwereeedffff     7  l    yy
ertrtrrrtrrr     5  d    PP
ertrtrrrtrrr    99  h    mm
...
[Note: the use of ROWID assumes that you don't have a column that can serve as a unique identifier for each row in table T1; if there is such a column - one defined with a UNIQUE index - you can use that field instead.]
Having this query ready, convert it into an INSERT into the temporary table TEMPO_T, along with whatever values you need to store for columns D_3 and D_4 (see the sketch at the end of this answer).
Now you have a backup of the original contents of column C_T and can therefore drop the column.
Now you can update the type that defines the structure of column C_T to its new form (i.e. D_1, ..., D_5) and alter table T1 by adding a column whose type is the updated one.
Finally, you can populate the new column from the contents stored in the temporary table (since you already have the backup query, I assume you know how to implement this - inserting a nested table into a column of the outer table).
That's it.
Needless to say, I would make a backup of your data before engaging in this.
Hope this description is detailed enough to enable you to complete the task at hand.
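For reference, a minimal sketch of the backup step, using the hypothetical names T1, C_T and TEMPO_T and the column mapping above:
-- Unnest the inner table with TABLE() and keep the owning row's ROWID;
-- D_3 and D_4 would be filled with whatever new values you need.
INSERT INTO tempo_t (source_rowid, d_1, d_2, d_5)
SELECT t.ROWID, n.c_2, n.c_3, n.c_1
FROM   t1 t,
       TABLE(t.c_t) n;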

How to alter a column (changing data size) if the table was created with partitions?

I have created a table with a partition:
CREATE TABLE edw_src.pageviewlog_dev
(
accessurl character varying(1000),
msisdn character varying(1000),
customerid integer
)
WITH (
OIDS=FALSE
)
DISTRIBUTED BY (msisdn)
PARTITION BY RANGE(customerid)
(
PARTITION customerid START (0) END (200)
)
Now I want to change the data size of accessurl from 1000 to 3000. I am not able to change it; whenever I try I get this error:
ERROR: "pageviewlog_dev_1_prt_customerid" is a member of a partitioning configuration
HINT: Perform the operation on the master table.
I am able to change it if I change the data type via pg_attribute. Is there any other way to change the data size of an existing column other than pg_attribute?
I have found the solution for this. Sorry for replying late. Below is the way to do it whenever we face this kind of problem in Postgres and Greenplum:
UPDATE pg_attribute SET atttypmod = 300+4
WHERE attrelid = 'edw_src.ivs_hourly_applog_events'::regclass
AND attname = 'adtransactionid';
Greenplum isn't PostgreSQL, so please don't confuse people by asking a Greenplum question with PostgreSQL in the title.
Don't modify catalog objects like pg_attribute. That will cause lots of problems and isn't supported.
The Admin Guide has the syntax for changing column data types, and this is all you need to do:
ALTER TABLE edw_src.pageviewlog_dev
ALTER COLUMN accessurl TYPE character varying(3000);
Here is the working example with your table:
CREATE SCHEMA edw_src;
CREATE TABLE edw_src.pageviewlog_dev
(
accessurl character varying(1000),
msisdn character varying(1000),
customerid integer
)
WITH (
OIDS=FALSE
)
DISTRIBUTED BY (msisdn)
PARTITION BY RANGE(customerid)
(
PARTITION customerid START (0) END (200)
);
Output:
NOTICE: CREATE TABLE will create partition "pageviewlog_dev_1_prt_customerid" for table "pageviewlog_dev"
Query returned successfully with no result in 47 ms.
And now alter the table:
ALTER TABLE edw_src.pageviewlog_dev
ALTER COLUMN accessurl TYPE character varying(3000);
Output:
Query returned successfully with no result in 62 ms.
Proof in psql:
\d edw_src.pageviewlog_dev
Table "edw_src.pageviewlog_dev"
Column | Type | Modifiers
------------+-------------------------+-----------
accessurl | character varying(3000) |
msisdn | character varying(1000) |
customerid | integer |
Number of child tables: 1 (Use \d+ to list them.)
Distributed by: (msisdn)
If you are unable to alter the table, it is probably because the catalog is corrupted after you updated pg_attribute directly. You can try dropping and recreating the table, or you can open a support ticket to have them attempt to correct the catalog corruption.

How to alter a primary key column to autoincrement in Derby using Eclipse

I created a table and set one INTEGER column as the primary key. Now I want to set auto increment by 1 on that column. I am using this query but getting an error:
ALTER TABLE APP.DocumentGroupCategory
ALTER ID INTEGER NOT NULL
GENERATED ALWAYS AS IDENTITY (START WITH 1, INCREMENT BY 1);
How can I alter the column without dropping the table and creating it again?

How to compare two Oracle schemas to get delta changes by altering the table rather than dropping and recreating it

I've already tried out a tool named TOYS. It is free, but unfortunately it didn't work.
Then I tried "Red Gate Schema Compare for Oracle", but it drops and recreates the table, whereas I need to just alter the table with the newly added/dropped columns.
Any help is highly appreciated.
Thanks
Starting from Oracle 11g you can use the dbms_metadata_diff package, and specifically its compare_alter() function, to compare the metadata of two schema objects:
Schema #1 HR
create table tb_test(
col number
)
Schema #2 HR2
create table tb_test(
col_1 number
)
select dbms_metadata_diff.compare_alter(
         'TABLE'    -- schema object type
       , 'TB_TEST'  -- object name in schema #1
       , 'TB_TEST'  -- object name in schema #2
       , 'HR'       -- schema #1 (defaults to the current schema)
       , 'HR2'      -- schema #2
       ) as res
from dual;
Result:
RES
-------------------------------------------------
ALTER TABLE "HR"."TB_TEST" ADD ("COL_1" NUMBER);
ALTER TABLE "HR"."TB_TEST" DROP ("COL");

Insert row without setting primary key column

I have this query:
INSERT INTO GOST (ASSORTMENTID, ROZMIAR, GOST)
VALUES ( 54,'S','MjgwMzktODkgMTc0LTk2')
I want to insert a new row into table GOST, but I don't want to specify the primary key column, GOSTID. I want the database to set the next id value.
When I run this code I get this error:
validation error for column GOSTID, value "* null *"
I understand that I should set the GOSTID column in the INSERT query, yes?
Is it possible to run this without that parameter?
I think a sample script is worth more than 1000 words:
Go to a shell on the Firebird server machine, cd to a folder where you have read/write permissions, start isql or isql-fb (depending on your system and Firebird version) and run this script:
create database 'netmajor.fdb' user 'sysdba' password 'masterkey';
set autoddl off;

create table netmajor_example (
    netmajor_id integer not null
  , str_data    varchar(200)
  , int_data    integer
  , constraint pk_netmajor_example
      primary key (netmajor_id)
);

create generator netmajor_gen;

set term ^;
create trigger netmajor_pkassign
  for netmajor_example
  active before insert position 1
as
begin
  if (new.netmajor_id is null) then
    new.netmajor_id = gen_id(netmajor_gen, 1);
end
^
commit work^
set term ; ^

insert into netmajor_example (str_data, int_data) values ('one', 1);
insert into netmajor_example (str_data, int_data) values ('twenty', 20);
commit work;

select * from netmajor_example;
Take a look at the results, which on my machine are:
NETMAJOR_ID  STR_DATA  INT_DATA
===========  ========  ========
          1  one              1
          2  twenty          20
If you have questions, don't hesitate to get in touch. Best regards.
Obviously, your primary key is a NOT NULL column, which means it is always required. You cannot insert a row without giving a value for the primary key (unless it is an "auto-number" column which gets set automatically by the database system).
Use a "before insert" trigger to set the value of the primary key. Firebird doesn't have an "auto-increment" field type, so you need to take care of it yourself.
See http://www.firebirdfaq.org/faq29/ for a tutorial on how to do this. Some DB applications (e.g. Database Workbench) can create the trigger and generator automatically.
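Applied to the question's GOST table, a hedged sketch of that generator-plus-trigger pattern (the generator and trigger names are hypothetical):
CREATE GENERATOR gostid_gen;
SET TERM ^ ;
CREATE TRIGGER gost_bi FOR gost
ACTIVE BEFORE INSERT POSITION 0
AS
BEGIN
  -- assign the next generator value only when no id was supplied
  IF (NEW.gostid IS NULL) THEN
    NEW.gostid = GEN_ID(gostid_gen, 1);
END
^
SET TERM ; ^
With this in place, the INSERT from the question can be run unchanged and GOSTID is assigned automatically.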
