I am facing an issue with my DataStage job. I have to fill a table TTPERIODEAS in Oracle from a .csv file. The SQL query in the Oracle connector is shown in this screenshot:
Oracle connector
And here is the Oracle script:
CREATE TABLE TTPERIODEAS
(
CDPARTITION VARCHAR2(5 BYTE) NOT NULL ENABLE,
CDCOMPAGNIE NUMBER(4,0) NOT NULL ENABLE,
CDAPPLI NUMBER(4,0) NOT NULL ENABLE,
NUCONTRA CHAR(15 BYTE) NOT NULL ENABLE,
DTDEBAS NUMBER(8,0) NOT NULL ENABLE,
DTFINAS NUMBER(8,0) NOT NULL ENABLE,
TAUXAS NUMBER(8,5) NOT NULL ENABLE,
CONSTRAINT PK_TTPERIODEAS
PRIMARY KEY (CDPARTITION, CDCOMPAGNIE, CDAPPLI, NUCONTRA, DTDEBAS)
)
PARTITION BY LIST(CDPARTITION)
(PARTITION P_PERIODEAS_13Q VALUES ('13Q'));
When running the job, I get the following error message and the table is not filled:
The index 'USINODSD0.SYS_C00249007' its partition is unusable
Please, I need help. Thanks.
The index is global (i.e. not partitioned) because there is no using index local at the end of the definition. This is also true for the PK index shown above. (I'm assuming they are two different things, because by default the DDL above would create an index named PK_TTPERIODEAS, so I'm not sure what SYS_C00249007 is.) If you can drop and rebuild them as local indexes (i.e. partitioned to match the table) then truncating or dropping a partition will no longer invalidate indexes.
For example, you could rebuild the primary key as:
alter table ttperiodeas
drop primary key;
alter table ttperiodeas
add constraint pk_ttperiodeas primary key (cdpartition,cdcompagnie,cdappli,nucontra,dtdebas)
using index local;
I don't know how SYS_C00249007 is defined, but you could use something similar.
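If you need to see how SYS_C00249007 is defined before deciding, a quick sketch against the data dictionary (assuming the index belongs to your own schema; otherwise use the DBA_/ALL_ views and pass the owner to get_ddl):
select index_name, index_type, partitioned, status   -- list the indexes on the table
from user_indexes
where table_name = 'TTPERIODEAS';
select dbms_metadata.get_ddl('INDEX', 'SYS_C00249007') from dual;   -- full DDL of the mystery index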
The create table command might be something like:
create table ttperiodeas
( cdpartition varchar2(5 byte) not null
, cdcompagnie number(4,0) not null
, cdappli number(4,0) not null
, nucontra varchar2(15 byte) not null
, dtdebas number(8,0) not null
, dtfinas number(8,0) not null
, tauxas number(8,5) not null
, constraint pk_ttperiodeas
primary key (cdpartition,cdcompagnie,cdappli,nucontra,dtdebas)
using index local
)
partition by list(cdpartition)
( partition p_periodeas_13q values ('13Q') );
Alternatively, you could add the update global indexes clause when dropping the partition:
alter table ttperiodeas drop partition p_periodeas_14q update global indexes;
(By the way, NUCONTRA should probably be a standard VARCHAR2 and not CHAR, which is intended for cross-platform compatibility and ANSI completeness, and in practice just wastes space and creates bugs.)
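A small demonstration of the kind of bug this causes (demo_char_test is just a throwaway table name for the sketch):
create table demo_char_test (nucontra char(15));
insert into demo_char_test values ('ABC123');
-- matches: both sides are CHAR/text literals, so blank-padded comparison is used
select * from demo_char_test where nucontra = 'ABC123';
-- no rows: one side is VARCHAR2, so non-padded comparison is used and the
-- stored, blank-padded 'ABC123         ' no longer equals 'ABC123'
select * from demo_char_test where nucontra = cast('ABC123' as varchar2(15));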
The message says that the index for the given partition is unusable, so you could try to rebuild the corresponding index partition with
alter index [index_name] rebuild partition [partition_name];
(with the fitting values for [index_name] and [partition_name]).
Before you do that, you should check the status of the index partitions in USER_IND_PARTITIONS (the index itself appears in USER_INDEXES), since your error message does not look like a typical Oracle error message.
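For example (a sketch; the index name below is taken from your error message):
-- status of each partition of the index (USABLE / UNUSABLE)
select index_name, partition_name, status
from user_ind_partitions
where index_name = 'SYS_C00249007';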
But since the index is global, as William Robertson pointed out, this is not applicable to the given situation.
Here is the code I'm using in Oracle SQL Developer:
CREATE TABLE ORDER_ITEMS(
ITEM_NO NUMBER(10),
ITEM_DESCRIPTION VARCHAR(50),
SIZE VARCHAR(5),
COST NUMBER(8,2),
QUANTITY NUMBER(10),
TOTAL NUMBER(8,2),
ITEM_ORDER_NO NUMBER(10),
CONSTRAINT ITM_NO_PK PRIMARY KEY (ITEM_NO));
The error has to do with the SIZE and COST tables; if I change the names of those two tables (for example, put an A at the end of them: SIZEA, COSTA), then the code works. Why are these table names invalid?
I think you mean column where you're writing table. Also, SIZE is a reserved word in Oracle SQL, as is NUMBER.
https://docs.oracle.com/cd/B19306_01/server.102/b14200/ap_keywd.htm
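For reference, a sketch of the table with the reserved word avoided; ITEM_SIZE is just a hypothetical replacement name. (You could instead declare the column as "SIZE" in double quotes, but then it becomes case-sensitive and must be quoted in every statement. COST is not on the reserved-word list, so it can stay.)
CREATE TABLE ORDER_ITEMS(
ITEM_NO NUMBER(10),
ITEM_DESCRIPTION VARCHAR(50),
ITEM_SIZE VARCHAR(5),   -- renamed: SIZE is reserved
COST NUMBER(8,2),
QUANTITY NUMBER(10),
TOTAL NUMBER(8,2),
ITEM_ORDER_NO NUMBER(10),
CONSTRAINT ITM_NO_PK PRIMARY KEY (ITEM_NO));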
I'm new to Postgres and even newer to understanding how explain works. I have a query below which is typical; I just replace the date:
explain
select account_id,
security_id,
market_value_date,
sum(market_value) market_value
from market_value_history mvh
inner join holding_cust hc on hc.id = mvh.owning_object_id
where
hc.account_id = 24766
and market_value_date = '2015-07-02'
and mvh.created_by = 'HoldingLoad'
group by account_id, security_id, market_value_date
order by security_id, market_value_date;
Attached is a screenshot of the explain output.
The holding_cust table has 2 million rows and the market_value_history table has 163 million rows.
Below are the table definitions and indexes for market_value_history and holding_cust:
I'd appreciate any advice you may be able to give me on tuning this query.
CREATE TABLE public.market_value_history
(
id integer NOT NULL DEFAULT nextval('market_value_id_seq'::regclass),
market_value numeric(18,6) NOT NULL,
market_value_date date,
holding_type character varying(25) NOT NULL,
owning_object_type character varying(25) NOT NULL,
owning_object_id integer NOT NULL,
created_by character varying(50) NOT NULL,
created_dt timestamp without time zone NOT NULL,
last_modified_dt timestamp without time zone NOT NULL,
CONSTRAINT market_value_history_pkey PRIMARY KEY (id)
)
WITH (
OIDS=FALSE
);
ALTER TABLE public.market_value_history
OWNER TO postgres;
-- Index: public.ix_market_value_history_id
-- DROP INDEX public.ix_market_value_history_id;
CREATE INDEX ix_market_value_history_id
ON public.market_value_history
USING btree
(owning_object_type COLLATE pg_catalog."default", owning_object_id);
-- Index: public.ix_market_value_history_object_type_date
-- DROP INDEX public.ix_market_value_history_object_type_date;
CREATE UNIQUE INDEX ix_market_value_history_object_type_date
ON public.market_value_history
USING btree
(owning_object_type COLLATE pg_catalog."default", owning_object_id, holding_type COLLATE pg_catalog."default", market_value_date);
CREATE TABLE public.holding_cust
(
id integer NOT NULL DEFAULT nextval('holding_cust_id_seq'::regclass),
account_id integer NOT NULL,
security_id integer NOT NULL,
subaccount_type integer,
trade_date date,
purchase_date date,
quantity numeric(18,6),
net_cost numeric(18,2),
adjusted_net_cost numeric(18,2),
open_date date,
close_date date,
created_by character varying(50) NOT NULL,
created_dt timestamp without time zone NOT NULL,
last_modified_dt timestamp without time zone NOT NULL,
CONSTRAINT holding_cust_pkey PRIMARY KEY (id)
)
WITH (
OIDS=FALSE
);
ALTER TABLE public.holding_cust
OWNER TO postgres;
-- Index: public.ix_holding_cust_account_id
-- DROP INDEX public.ix_holding_cust_account_id;
CREATE INDEX ix_holding_cust_account_id
ON public.holding_cust
USING btree
(account_id);
-- Index: public.ix_holding_cust_acctid_secid_asofdt
-- DROP INDEX public.ix_holding_cust_acctid_secid_asofdt;
CREATE INDEX ix_holding_cust_acctid_secid_asofdt
ON public.holding_cust
USING btree
(account_id, security_id, trade_date DESC);
-- Index: public.ix_holding_cust_security_id
-- DROP INDEX public.ix_holding_cust_security_id;
CREATE INDEX ix_holding_cust_security_id
ON public.holding_cust
USING btree
(security_id);
-- Index: public.ix_holding_cust_trade_date
-- DROP INDEX public.ix_holding_cust_trade_date;
CREATE INDEX ix_holding_cust_trade_date
ON public.holding_cust
USING btree
(trade_date);
Two things:
As Dmitry pointed out, you should look at creating an index on the market_value_date field (see the sketch below). It's possible that after that you get a completely different query plan, which may or may not bring up other bottlenecks, but it should certainly remove this seq scan.
Minor (since I doubt it affects performance), but secondly, if you aren't enforcing the field length by design, you may want to change the created_by field to TEXT. As can be seen in the plan, it is casting created_by to TEXT for this query.
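A minimal sketch of the first point; the index names below are made up, and whether the composite version pays off depends on the plan and data distribution, so verify with EXPLAIN (ANALYZE) before and after:
-- narrowest option: just the filter column suggested above
CREATE INDEX ix_mvh_market_value_date
ON public.market_value_history (market_value_date);
-- alternative: cover the join column plus both filter columns used by this query
CREATE INDEX ix_mvh_owning_id_date_created_by
ON public.market_value_history (owning_object_id, market_value_date, created_by);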
I am new to Oracle and for the sake of learning, I need to know how to create a table so that the newest records inserted are at the top.
In T-SQL, I would create a CLUSTERED INDEX, descending, on a unique column.
Using Oracle SQL Developer, below is a sample table: I want the record with the most recent ORDER_DATE to be on top. Note: The Date is stored as a string. I also tried using REVERSE on the Primary Key column but that did not do it.
CREATE TABLE ORDERS
(
ORDER_NBR NUMBER GENERATED BY DEFAULT ON NULL AS IDENTITY
INCREMENT BY 10
MAXVALUE 9999999999999999999999999999
MINVALUE 135790864211
CACHE 20 NOT NULL
, CUSTOMER_ID NUMBER NOT NULL
, ORDER_TYPE NUMBER NOT NULL
, ORDER_DATE NVARCHAR2(27) NOT NULL
, RETURN_DATE NVARCHAR2(27)
, CONSTRAINT PK_ORDER_NBR_ORDERS PRIMARY KEY
(
ORDER_NBR
));
CREATE UNIQUE INDEX IDX_ORDER_DATE_ORDERS ON ORDERS (ORDER_DATE DESC);
CREATE INDEX IDX_RETURN_DATE_ORDERS ON ORDERS (RETURN_DATE DESC);
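For what it's worth, an Oracle heap table has no guaranteed storage or default retrieval order, so "newest on top" is normally expressed in the query itself; the descending index you created can then support that sort. A sketch, assuming ORDER_DATE is stored in a string format that sorts chronologically (e.g. ISO 8601):
-- row order is only guaranteed by ORDER BY; IDX_ORDER_DATE_ORDERS may be used to avoid a sort
SELECT ORDER_NBR, CUSTOMER_ID, ORDER_TYPE, ORDER_DATE, RETURN_DATE
FROM ORDERS
ORDER BY ORDER_DATE DESC;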
I cannot figure out the syntax issue with the following code. When I run it, I get the error
ORA-00907: missing right parenthesis
Can anyone point out my flaw please?
CREATE OR REPLACE VIEW LATESTAPPLICATIONS AS
SELECT *
FROM application_history
WHERE entry_time IN
(SELECT entry_time
FROM application_history
GROUP BY application_number; )
ORDER BY entry_number;
Here is the table definition for application_history. I ideally want to only view application numbers with the latest time-stamps.
CREATE TABLE "APPLICATION_HISTORY"
( "ENTRY_NUMBER" NUMBER(28,0),
"APPLICATION_NUMBER" NUMBER(16,0) CONSTRAINT "APP_NUM_NN" NOT NULL ENABLE,
"ACTIVE" CHAR(1) DEFAULT 0 CONSTRAINT "ACTIVE_NN" NOT NULL ENABLE,
"STATUS" VARCHAR2(40) DEFAULT 'APPLICATION ENTERED' CONSTRAINT "STATUS_NN" NOT NULL ENABLE,
"DATE_APPROVED" DATE,
"DATE_APPLIED" DATE DEFAULT SYSDATE CONSTRAINT "DATE_APPLIED_NN" NOT NULL ENABLE,
"ENTRY_TIME" TIMESTAMP (6) DEFAULT SYSDATE,
CONSTRAINT "ENTRY_NUM_PK" PRIMARY KEY ("ENTRY_NUMBER")
USING INDEX ENABLE
)
You have a semicolon on the second-to-last line: GROUP BY application_number; ). Remove it and you should be fine.
The semicolon acts as the statement terminator in Oracle; because you placed it before the ), Oracle never found the closing parenthesis.
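Note that even with the semicolon removed, the subquery selects entry_time while grouping by application_number, which Oracle rejects (ORA-00979). Since you say you only want the rows with the latest time-stamp per application number, a sketch along those lines (assuming entry_time is what defines "latest") might be:
CREATE OR REPLACE VIEW latestapplications AS
SELECT ah.*
FROM application_history ah
WHERE ah.entry_time = (SELECT MAX(ah2.entry_time)
                       FROM application_history ah2
                       WHERE ah2.application_number = ah.application_number)
ORDER BY ah.entry_number;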
I created a table in Oracle 10g using the following query:
CREATE TABLE "EMPLOYEESTASKS"
( "EMPLOYEEID" NUMBER,
"TASKDATE" VARCHAR2(40),
"STATUS" NUMBER,
"CUSTOMERID" NUMBER,
"ADDRESS" VARCHAR2(400) NOT NULL ENABLE,
"TASKTIME" VARCHAR2(40) NOT NULL ENABLE,
"VISITDATE" VARCHAR2(40),
"VISITTIME" VARCHAR2(40),
CONSTRAINT "EMPLOYEESTASKS_PK" PRIMARY KEY ("EMPLOYEEID", "TASKDATE", "TASKTIME") ENABLE,
CONSTRAINT "EMPLOYEESTASKS_FK" FOREIGN KEY ("EMPLOYEEID")
REFERENCES "EMPLOYEES" ("ID") ON DELETE CASCADE ENABLE
)
The table was created successfully, but the problem is that while I am trying to insert a row into the table it shows the error
ORA-01722: invalid number
The query I used is:
insert into employeestasks values(12305,'30-11-2011','09:00',0,45602,'Sarpavaram Junction ,kakinada',null,null)
What is that invalid number?
It looks like the columns in your table are ordered employeeid, taskdate, status, and you're trying to insert '09:00' into status, which is a NUMBER. This is no good. You need to use the same order of columns or specify which value goes into which column.
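A sketch with an explicit column list, assuming the values were meant to line up as employeeid, taskdate, tasktime, status, customerid, address:
insert into employeestasks
(employeeid, taskdate, tasktime, status, customerid, address, visitdate, visittime)
values
(12305, '30-11-2011', '09:00', 0, 45602, 'Sarpavaram Junction ,kakinada', null, null);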
Also, you really like capslock, huh?