How to alter a column (changing data size) if the table was created with partitions? - ddl

I have created table with a partition:
CREATE TABLE edw_src.pageviewlog_dev
(
accessurl character varying(1000),
msisdn character varying(1000),
customerid integer
)
WITH (
OIDS=FALSE
)
DISTRIBUTED BY (msisdn)
PARTITION BY RANGE(customerid)
(
PARTITION customerid START (0) END (200)
)
Now I want to change the data size of accessurl from 1000 to 3000. I am not able to change it; whenever I try, I get this error:
ERROR: "pageviewlog_dev_1_prt_customerid" is a member of a partitioning configuration
HINT: Perform the operation on the master table.
I am able to change it if I change the data type through pg_attribute. Is there any other way to change the data size of an existing column, other than updating pg_attribute?
I have found the solution for the same. Sorry for replying late. Below is the way to do it whenever we face this kind of problem in PostgreSQL and Greenplum:
UPDATE pg_attribute SET atttypmod = 300+4
WHERE attrelid = 'edw_src.ivs_hourly_applog_events'::regclass
AND attname = 'adtransactionid';
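As an aside, the 300+4 in that UPDATE is not arbitrary: in PostgreSQL-family catalogs, atttypmod for a varchar(n) column stores the declared length plus the 4-byte varlena header (VARHDRSZ). A minimal sketch of that arithmetic, with illustrative helper names:

```python
# Why the catalog hack writes "300+4": for varchar(n) columns,
# pg_attribute.atttypmod stores the declared length plus VARHDRSZ.
# Helper names here are illustrative, not a PostgreSQL API.

VARHDRSZ = 4  # PostgreSQL's varlena header size in bytes

def varchar_atttypmod(declared_length: int) -> int:
    """atttypmod value the catalog stores for varchar(declared_length)."""
    return declared_length + VARHDRSZ

def varchar_length(atttypmod: int) -> int:
    """Recover the declared length from a stored atttypmod."""
    return atttypmod - VARHDRSZ

print(varchar_atttypmod(300))  # 304, the value written by the UPDATE above
print(varchar_length(1004))    # 1000, e.g. accessurl's original size
```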

Greenplum isn't PostgreSQL, so please don't confuse people by asking a Greenplum question with PostgreSQL in the title.
Don't modify catalog objects like pg_attribute. That will cause lots of problems and isn't supported.
The Admin Guide has the syntax for changing column datatypes and this is all you need to do:
ALTER TABLE edw_src.pageviewlog_dev
ALTER COLUMN accessurl TYPE character varying(3000);
Here is the working example with your table:
CREATE SCHEMA edw_src;
CREATE TABLE edw_src.pageviewlog_dev
(
accessurl character varying(1000),
msisdn character varying(1000),
customerid integer
)
WITH (
OIDS=FALSE
)
DISTRIBUTED BY (msisdn)
PARTITION BY RANGE(customerid)
(
PARTITION customerid START (0) END (200)
);
Output:
NOTICE: CREATE TABLE will create partition "pageviewlog_dev_1_prt_customerid" for table "pageviewlog_dev"
Query returned successfully with no result in 47 ms.
And now alter the table:
ALTER TABLE edw_src.pageviewlog_dev
ALTER COLUMN accessurl TYPE character varying(3000);
Output:
Query returned successfully with no result in 62 ms.
Proof in psql:
\d edw_src.pageviewlog_dev
Table "edw_src.pageviewlog_dev"
Column | Type | Modifiers
------------+-------------------------+-----------
accessurl | character varying(3000) |
msisdn | character varying(1000) |
customerid | integer |
Number of child tables: 1 (Use \d+ to list them.)
Distributed by: (msisdn)
If you are unable to alter the table, it is probably because the catalog is corrupted after you updated pg_attribute directly. You can try dropping the table and recreating it, or you can open a support ticket to have support attempt to correct the catalog corruption.

Related

Replace hive table with partition

There is a Hive table with 2 string columns and one partition column, "cmd_out".
I'm trying to rename the 2 columns ('col1', 'col2') by using REPLACE COLUMNS:
Alter table 'table_test' replace columns(
'col22' String,
'coll33' String
)
But I receive the following exception:
Partition column name 'cmd_out' conflicts with table columns.
When I include the partition column in query
Alter table 'table_test' replace columns(
'cmd_out' String,
'col22' String,
'coll33' String
)
I receive:
Duplicate column name cmd_out in the table definition
If you want to rename a column, you need to use alter table ... change.
Here is the syntax:
alter table mytab change col1 new_col1 string;

TSQL: Fastest way to convert data_type and update it

My database performance skills are not really good and I could not find any good Google results, so I need your help.
I am trying to convert all columns of a table. All data in this table is of datatype varchar.
I have a reference table which has wrong data but correct metadata, like Column_Name, Data_Type, etc. So I try to use the table with the correct metadata to convert the table with the correct data. As in the following example, the dynamic script wants to convert a column that should actually be datetime:
IF @Datatype IN ('datetime')
Begin
set @sqlDate = ('
Update dbo.' + @Table + '
SET ' + @Column + ' = TRY_CONVERT(datetime, ' + @Column + ', 105);
Alter Table dbo.' + @Table + '
Alter Column ' + @Column + ' datetime;
')
exec (@sqlDate);
End
So my goal is to convert a Table like this :
+----------------+----------------+
| Col1 (varchar) | Col2 (varchar) |
+----------------+----------------+
| '01.01.2000' | '124.5' |
+----------------+----------------+
To this:
+-------------------------+--------------+
| Col1(datetime) | Col2 (float) |
+-------------------------+--------------+
| yyyy-mm-dd hh:mi:ss.mmm | 124.5        |
+-------------------------+--------------+
(based on the correct metadata table)
Do you think it's better to first convert the data into a #TempTable and update the original column via the pre-converted #TempTable? Any better solution?
Thanks a lot!
Here's how I would do it.
First, create and populate a sample table (please save us this step in your future questions):
CREATE TABLE Sample
(
DateTimeColumn varchar(50),
FloatColumn varchar(50)
);
INSERT INTO Sample(DateTimeColumn, FloatColumn) VALUES ('01.01.2000', '124.5');
Then - Alter the table to add the columns with the correct data type.
ALTER TABLE Sample
ADD AsDateTime DateTime,
AsFloat float;
Populate the new columns:
UPDATE Sample
SET AsDateTime = TRY_CONVERT(datetime, DateTimeColumn, 105),
AsFloat = TRY_CAST(FloatColumn AS float);
At this point, you should pause and check if you really did get correct values.
Once the new columns' data is verified, you can drop the old columns
ALTER TABLE Sample
DROP COLUMN DateTimeColumn;
ALTER TABLE Sample
DROP COLUMN FloatColumn;
and rename the new columns:
EXEC sp_rename 'dbo.Sample.AsDateTime', 'DateTimeColumn', 'COLUMN';
EXEC sp_rename 'dbo.Sample.AsFloat', 'FloatColumn', 'COLUMN';
A quick select to verify the change:
SELECT DateTimeColumn, FloatColumn
FROM Sample;
Results:
DateTimeColumn FloatColumn
2000-01-01 00:00:00 124.5
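For readers without a SQL Server instance handy, the same add-populate-verify-swap sequence can be sketched in Python with SQLite; strptime stands in for TRY_CONVERT's style 105 (dd.mm.yyyy), and the table and column names mirror the sample above. This is an illustrative transplant, not SQL Server syntax.

```python
# Re-run of the answer's convert-then-swap pattern, transplanted to SQLite
# so it can run anywhere. Names mirror the answer's Sample table.
import sqlite3
from datetime import datetime

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Sample (DateTimeColumn TEXT, FloatColumn TEXT)")
conn.execute("INSERT INTO Sample VALUES ('01.01.2000', '124.5')")

# Step 1: add correctly typed columns alongside the old ones.
conn.execute("ALTER TABLE Sample ADD COLUMN AsDateTime TIMESTAMP")
conn.execute("ALTER TABLE Sample ADD COLUMN AsFloat REAL")

# Step 2: populate them, mimicking TRY_CONVERT(datetime, x, 105)
# by returning NULL on unparseable input instead of raising.
def try_convert_dt105(s):
    try:
        return datetime.strptime(s, "%d.%m.%Y").isoformat(sep=" ")
    except (TypeError, ValueError):
        return None

conn.create_function("try_convert_dt105", 1, try_convert_dt105)
conn.execute("""UPDATE Sample
                SET AsDateTime = try_convert_dt105(DateTimeColumn),
                    AsFloat    = CAST(FloatColumn AS REAL)""")

# Step 3: verify before dropping/renaming the old columns
# (ALTER TABLE ... DROP COLUMN needs SQLite >= 3.35).
row = conn.execute("SELECT AsDateTime, AsFloat FROM Sample").fetchone()
print(row)  # ('2000-01-01 00:00:00', 124.5)
```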

ERROR: Unsupported access to table with projection expressions or aggregates

I want to remove partitioning from table:
ALTER TABLE rosing_watch_sessions REMOVE PARTITIONING
but it raises an error:
Severity: ROLLBACK,
Message: Unsupported access to table with projection expressions or aggregates,
Sqlstate: 0A000,
Routine: checkUnsupportedMaVeriCKTableError,
File: /scratch_a/release/16125/vbuild/vertica/Catalog/CatalogLookup.cpp,
Line: 1383
What does this error message mean?
P.S.
Result of select export_objects('', 'rosing_watch_sessions'):
CREATE TABLE staging.rosing_watch_sessions
(
id IDENTITY ,
session_uid varchar(255) NOT NULL,
...
)
PARTITION BY (rosing_watch_sessions.requested_day);
ALTER TABLE staging.rosing_watch_sessions ADD CONSTRAINT C_PRIMARY PRIMARY KEY (id);
CREATE PROJECTION staging.rosing_watch_sessions_super /*+basename(rosing_watch_sessions),createtype(P)*/
(
id,
session_uid,
...
)
AS
SELECT rosing_watch_sessions.id,
rosing_watch_sessions.session_uid,
...
FROM staging.rosing_watch_sessions
ORDER BY rosing_watch_sessions.id
SEGMENTED BY hash(rosing_watch_sessions.requested_day) ALL NODES ;
CREATE PROJECTION staging.channel_coverage
(
resource_uid,
device_uid,
request_date,
num_requests,
__partition_key_value__ ENCODING RLE
)
AS
SELECT rosing_watch_sessions.resource_uid,
rosing_watch_sessions.device_uid,
date("timezone"('UTC'::varchar(3), rosing_watch_sessions.requested_at)) AS request_date,
count(rosing_watch_sessions.session_uid) AS num_requests,
max(rosing_watch_sessions.requested_day) AS __partition_key_value__
FROM staging.rosing_watch_sessions
GROUP BY rosing_watch_sessions.resource_uid,
rosing_watch_sessions.device_uid,
date("timezone"('UTC'::varchar(3), rosing_watch_sessions.requested_at))
;
SELECT MARK_DESIGN_KSAFE(0);
Live aggregate projections do not support certain operations (yet).
DROP PROJECTION staging.channel_coverage;
ALTER TABLE rosing_watch_sessions REMOVE PARTITIONING;
Then rebuild staging.channel_coverage using the DDL you have.

Why is the NLSSORT index not used for this query?

In our application we have case-insensitive semantics configured at session level:
alter session set NLS_COMP=LINGUISTIC;
alter session set NLS_SORT=BINARY_AI;
but then I want to have a table with a NAME column with binary semantics, so I defined a function-based index accordingly:
create table RAW_SCREEN (
ID number(10) constraint RSCR_PK primary key,
NAME nvarchar2(256) not null
);
create unique index RSCR_IDX on RAW_SCREEN (nlssort(NAME, 'NLS_SORT=BINARY'));
I would have expected the query below to take advantage of the function-based index:
select * from RAW_SCREEN where
nlssort(NAME, 'NLS_SORT=BINARY') = nlssort(N'raw_screen1', 'NLS_SORT=BINARY');
but it doesn't. The query plan shows a table scan. While experimenting, I've discovered that a simple index on NAME does the trick:
create unique index RSCR_IDX2 on RAW_SCREEN (NAME);
When running the query again, the RSCR_IDX2 index was used successfully.
Now, that is not very surprising, but I can't understand why the first function-based index was not used by the optimizer. The indexed expression matches exactly the expression used in the WHERE condition. Do you have any idea why it wasn't used?
NOTE: This was run on Oracle 10.2
Here's a full test script if you want to try it out:
alter session set NLS_COMP=LINGUISTIC;
alter session set NLS_SORT=BINARY_AI;
create table RAW_SCREEN (
ID number(10) constraint RSCR_PK primary key,
NAME nvarchar2(256) not null
);
create unique index RSCR_IDX on RAW_SCREEN (nlssort(NAME, 'NLS_SORT=BINARY'));
--create unique index RSCR_IDX2 on RAW_SCREEN (NAME);
begin
for i in 1..10000
loop
insert into RAW_SCREEN values (i, 'raw_screen' || i);
end loop;
end;
/
commit;
select * from RAW_SCREEN where nlssort(NAME, 'NLS_SORT=BINARY') = nlssort(N'raw_screen1000', 'NLS_SORT=BINARY');
Expressions are converted to NLS session settings in DML, but not in DDL.
This is arguably a bug in the behavior of NLSSORT(char, 'NLS_SORT=BINARY'). From the manual: "If you specify BINARY, then this function returns char." But that is not true for the index. Normally it is very convenient that the index expression does not undergo any transformation; if it depended on session settings, then tools like DBMS_METADATA.GET_DDL would have to return many ALTER SESSION statements. But in this case it means that you can create an index that will never be used.
The explain plan shows the real expression. Here's how Oracle uses nlssort in a session without it being explicitly used:
alter session set nls_comp=linguistic;
alter session set nls_sort=binary_ai;
drop table raw_screen;
create table raw_screen (
id number(10) constraint rscr_pk primary key,
name nvarchar2(256) not null
);
create unique index idx_binary_ai
on raw_screen (nlssort(name, 'nls_sort=binary_ai'));
explain plan for select * from raw_screen where name = n'raw_screen1000';
select * from table(dbms_xplan.display(format=>'basic predicate'));
Plan hash value: 2639454581
-----------------------------------------------------
| Id | Operation | Name |
-----------------------------------------------------
| 0 | SELECT STATEMENT | |
| 1 | TABLE ACCESS BY INDEX ROWID| RAW_SCREEN |
|* 2 | INDEX UNIQUE SCAN | IDX_BINARY_AI |
-----------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - access(NLSSORT("NAME",'nls_sort=''BINARY_AI''')=HEXTORAW('0072006
10077005F00730063007200650065006E003100300030003000'))
This example shows that nlssort(char, 'nls_sort=binary') is dropped by the DML:
alter session set nls_comp=linguistic;
alter session set nls_sort=binary_ai;
drop table raw_screen;
create table raw_screen (
id number(10) constraint rscr_pk primary key,
name nvarchar2(256) not null
);
create unique index idx_binary_ai on
raw_screen (nlssort(name, 'nls_sort=binary_ai'));
explain plan for select * from raw_screen where
nlssort(name,'nls_sort=binary') = nlssort(N'raw_screen1000','nls_sort=binary');
select * from table(dbms_xplan.display(format=>'basic predicate'));
Plan hash value: 237065300
----------------------------------------
| Id | Operation | Name |
----------------------------------------
| 0 | SELECT STATEMENT | |
|* 1 | TABLE ACCESS FULL| RAW_SCREEN |
----------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter("NAME"=U'raw_screen1000')
In summary - index DDL needs to exactly match the transformed expressions, which can depend on session settings and the unusual behavior of binary.
When you apply a function to a column in the WHERE clause of a query, any corresponding indexes on this column must also include the function for Oracle to make use of them when executing the query. The NLSSORT function can be applied to strings in your WHERE clause automatically by Oracle if you set NLS_COMP and NLS_SORT appropriately.
To enable case-insensitive searching, we must convert the strings stored in the table by applying a function such as upper(), lower(), etc. We must then also create a function-based index on the column with the same function we use in our queries.
By changing the NLS_COMP parameter to ANSI and the NLS_SORT parameter to BINARY_CI for a session, however, Oracle will automatically apply the NLSSORT function to strings in your query. In this case, you don't have to change your query, as Oracle does this for you behind the scenes.
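The matching rule generalizes beyond Oracle. As a self-contained illustration (SQLite here, with made-up names, not Oracle syntax), an expression index is only usable when the query predicate repeats the indexed expression exactly:

```python
# Expression-index matching in miniature: SQLite also supports indexes on
# expressions, and its planner likewise uses them only when the WHERE clause
# contains the indexed expression verbatim. Table/index names are illustrative.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE raw_screen (id INTEGER PRIMARY KEY, "
             "name TEXT NOT NULL)")
conn.executemany("INSERT INTO raw_screen VALUES (?, ?)",
                 [(i, f"Raw_Screen{i}") for i in range(1, 1001)])
# Function-based (expression) index for case-insensitive lookups.
conn.execute("CREATE INDEX rscr_lower_idx ON raw_screen (lower(name))")

# Predicate written with the same lower(...) expression -> index is usable.
plan = conn.execute("EXPLAIN QUERY PLAN SELECT * FROM raw_screen "
                    "WHERE lower(name) = lower('raw_screen42')").fetchall()
print(plan)   # plan mentions USING INDEX rscr_lower_idx

# Predicate on the bare column -> the expression index cannot be used.
plan2 = conn.execute("EXPLAIN QUERY PLAN SELECT * FROM raw_screen "
                     "WHERE name = 'raw_screen42'").fetchall()
print(plan2)  # full table scan, no index
```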

How to compare two Oracle schemas to get delta changes by altering the table, not dropping and recreating it

I've already tried out a tool named TOYS. I found it free, but unfortunately it didn't work.
Then I tried "Red Gate Schema Compare for Oracle", but it drops and recreates the table, whereas I need to just alter the table with the newly added/dropped columns.
Any help is highly appreciated.
Thanks
Starting from Oracle 11g you can use the dbms_metadata_diff package, and specifically its compare_alter() function, to compare the metadata of two schema objects:
Schema #1 HR
create table tb_test(
col number
)
Schema #2 HR2
create table tb_test(
col_1 number
)
select dbms_metadata_diff.compare_alter( 'TABLE' -- schema object type
, 'TB_TEST' -- object name
, 'TB_TEST' -- object name
, 'HR' -- by default current schema
, 'HR2'
) as res
from dual;
Result:
RES
-------------------------------------------------
ALTER TABLE "HR"."TB_TEST" ADD ("COL_1" NUMBER);
ALTER TABLE "HR"."TB_TEST" DROP ("COL");
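To see what compare_alter() is doing conceptually, here is a toy Python sketch that computes the column delta between two table definitions and emits the corresponding ALTER statements. It is illustrative only: a real comparison must also handle type changes, constraints, storage clauses, and so on.

```python
# Toy version of the compare_alter() idea: given two column lists,
# emit ALTER TABLE ... ADD / DROP statements for the delta.
def compare_alter(table, cols_src, cols_dst):
    """cols_* map column name -> type; returns ALTERs turning src into dst."""
    stmts = []
    for col, typ in cols_dst.items():
        if col not in cols_src:  # column exists only in the target schema
            stmts.append(f'ALTER TABLE {table} ADD ("{col}" {typ});')
    for col in cols_src:
        if col not in cols_dst:  # column no longer exists in the target
            stmts.append(f'ALTER TABLE {table} DROP ("{col}");')
    return stmts

# The HR vs HR2 example from the answer:
src = {"COL": "NUMBER"}    # HR.TB_TEST
dst = {"COL_1": "NUMBER"}  # HR2.TB_TEST
for s in compare_alter('"HR"."TB_TEST"', src, dst):
    print(s)
# ALTER TABLE "HR"."TB_TEST" ADD ("COL_1" NUMBER);
# ALTER TABLE "HR"."TB_TEST" DROP ("COL");
```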
