I'm new to both Oracle and Informatica.
Currently I'm working on a small task where I need to select all records from the source table, filter them to keep only records where field1='Y', and finally insert new rows containing only the src.field2 and src.field3 values into the target table.
These two fields make up the primary key and the index of the target table.
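In plain SQL the intended operation would look roughly like this (a sketch only; source_table and target_table are placeholder names):
INSERT INTO target_table (field2, field3)
SELECT src.field2, src.field3
FROM source_table src
WHERE src.field1 = 'Y';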
So I get an error in Informatica:
"ORA-26002: Table has index defined upon it"
I'd rather not drop the index. Is there a workaround?
I've tried setting the index to UNUSABLE with ALTER INDEX, but I got the same error.
Please advise.
Thanks.
Try using Normal load mode instead of Bulk. You can set this in the session properties for the target.
Hi, can anyone help me with this issue? I have a problem inserting a dynamic value taken from one database table and passing it to another table in Katalon.
Please find my information below:
The first screenshot shows the ab.dbo.DOCUMENT table, in which DOCUMENT_ID is auto-populated, meaning a random number appears by itself.
The other screenshot shows the bc.dbo.DOCUMENT_IC table, where I need to manually key in the DOCUMENT_ID value based on what was generated for DOCUMENT_ID in the ab.dbo.DOCUMENT table.
Attached is a screenshot of bc.dbo.DOCUMENT_IC.
In Katalon I am using a keyword to connect to my database, run an insert query, and close the connection. I know these steps and am able to connect to the database with Katalon. But I am not sure how to pass the dynamically generated DOCUMENT_ID value from the ab.dbo.DOCUMENT table to the DOCUMENT_ID column of the bc.dbo.DOCUMENT_IC table, which I currently have to key in manually based on the value given.
Below is my Katalon script:
Hopefully someone can help me with this.
Thank you.
If I have an auto-incrementing ID in one table and I need that value elsewhere, I would typically write SQL like this:
insert into firsttable (Document_Type) values ('PDF');
insert into secondtable (Document_ID, App_ref_Num) values (@@Identity, 'somenumber');
In the databases I have worked with, @@Identity will give you the integer ID of the last inserted row. If you can't run multiple statements, most connection libraries will have something like a $conn->insert_id that does the same thing as running SELECT @@IDENTITY.
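Applied to the tables in your question, a minimal T-SQL sketch could look like this (DOCUMENT_ID is from your screenshots; DOCUMENT_TYPE and APP_REF_NUM are placeholder column names following the example above):
-- Insert the parent row; DOCUMENT_ID is generated automatically
INSERT INTO ab.dbo.DOCUMENT (DOCUMENT_TYPE) VALUES ('PDF');
-- Capture the identity value generated by the insert above (same batch, same scope)
DECLARE @docId INT = SCOPE_IDENTITY();
-- Reuse that value for the child table instead of keying it in manually
INSERT INTO bc.dbo.DOCUMENT_IC (DOCUMENT_ID, APP_REF_NUM) VALUES (@docId, 'somenumber');
If you have to run the statements one at a time from Katalon, running SELECT @@IDENTITY on the same connection right after the first INSERT returns the same generated value, which you can store in a variable and use in the second INSERT.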
Newbie with Elastic Stack
I have an Oracle database table where rows get inserted daily at 12 AM, and later those column values get updated whenever they change.
I tried doc_as_upsert, but the values aren't being updated; instead new rows get inserted, creating duplicates of that data.
There is no unique ID for that table.
Can I use the Elastic _id as a reference for updating the data?
Can anyone suggest a solution to this problem?
Thanks in advance
I'm using the Data Skipping Indexes feature in ClickHouse and I'm confused about its usage. If I add a data skipping index when I create the table, like this:
CREATE TABLE MyTable
(
...
INDEX index_time TimeStamp TYPE minmax GRANULARITY 1
)
ENGINE = MergeTree()
...
When I query with a TimeStamp filter condition, 'index_time' works. But if I didn't add the index when creating the table and instead added it with the Manipulations With Data Skipping Indices feature, like this:
ALTER TABLE MyTable ADD INDEX index_time TimeStamp TYPE minmax GRANULARITY 1
then the index 'index_time' doesn't work.
My database is running in production, so I can't recreate the table; I have to use the second way. Can anyone explain why it doesn't work, or whether I'm using the feature in the wrong way?
The reason your queries don't use the index after ALTER TABLE ADD INDEX is that the index does not exist yet. (!)
Any new data will be properly indexed, which is why your index works when you put it in CREATE TABLE: ClickHouse builds the index as you load data. If you created the table, ran ALTER TABLE ADD INDEX, and then loaded data, you would see the same behavior.
When the data already exists, things are different. ALTER TABLE updates the metadata for the table, but at that point all your data has already been written to parts in the table. ClickHouse does not rewrite parts automatically to implement new indexes. However, you should be able to force a rewrite that includes the index by running:
OPTIMIZE TABLE MyTable FINAL
See the GitHub issue https://github.com/yandex/ClickHouse/issues/6561 referenced by Ruijang for more information.
It's quite right that
OPTIMIZE TABLE my_table_name FINAL;
does rebuild the indexes set on the table. But there are scenarios in a columnar DB where you want to avoid rewriting EVERYTHING. If you just add a single index to an already existing table with lots of data, you can rebuild only the new index, which involves two steps:
Step 1 - Define the index
Creating the INDEX itself just defines what the index should do, which is reflected in ClickHouse as metadata added to the table. There is no real index build-up, so nothing will be faster yet. It's also a lightweight operation, as it won't change data or build up any structures besides the table metadata.
It's important to understand that any new incoming data will be indexed on insert, but any pre-existing data is not included!
ALTER TABLE my_table_name ADD INDEX my_index(my_expression) TYPE minmax GRANULARITY 1
Note that ClickHouse can index expressions, so it could simply be the column name as in the question, or a more complex expression (e.g. my_index(price * sold_items * revshare)). The index will, of course, work on that expression only.
Step 2 - Build up (materialize) the index
After the metadata has been created, the index needs to be built up for existing data. This action is called materialize and needs to be triggered explicitly. The good thing is that you can do this individually for any index that was added or changed. This is a heavy operation, as it triggers work on the database.
ALTER TABLE my_table_name MATERIALIZE INDEX my_index;
Also have a look at the ClickHouse docs on Manipulating Data Skipping Indices.
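Applied to the table from the question (reusing the MyTable, TimeStamp, and index_time names from above), the whole sequence would be roughly:
-- Step 1: define the index; only table metadata changes, existing parts are untouched
ALTER TABLE MyTable ADD INDEX index_time TimeStamp TYPE minmax GRANULARITY 1;
-- Step 2: build the index for the data that already exists (heavy operation)
ALTER TABLE MyTable MATERIALIZE INDEX index_time;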
I am trying to load tables into the Oracle In-Memory column store. I have enabled the tables for INMEMORY by using the SQL*Plus command ALTER TABLE table_name INMEMORY. The table also contains data, i.e. the table is populated. But when I run SELECT v.owner, v.segment_name name, v.populate_status status FROM v$im_segments v;, it shows no rows selected.
What can be the problem?
Have you considered this?
https://docs.oracle.com/database/121/CNCPT/memory.htm#GUID-DF723C06-62FE-4E5A-8BE0-0703695A7886
Population of the IM Column Store in Response to Queries
Setting the INMEMORY attribute on an object means that this object is a candidate for population in the IM column store, not that the database immediately populates the object in memory.
By default (INMEMORY PRIORITY is set to NONE), the database delays population of a table in the IM column store until the database considers it useful. When the INMEMORY attribute is set for an object, the database may choose not to materialize all columns when the database determines that the memory is better used elsewhere. Also, the IM column store may populate a subset of columns from a table.
You probably need to run a SELECT against the data first.
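A minimal sketch of that, plus the alternative of raising the population priority mentioned in the quote above (my_table is a placeholder name; the FULL hint just forces the table to actually be scanned):
-- Option 1: trigger population by reading the table once
SELECT /*+ FULL(t) */ COUNT(*) FROM my_table t;
-- Option 2: give the table a priority so it is populated without waiting for a query
ALTER TABLE my_table INMEMORY PRIORITY CRITICAL;
-- Then check the population status again
SELECT v.owner, v.segment_name name, v.populate_status status FROM v$im_segments v;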
When I execute the UPSERT command in Apache Phoenix, I always see that Phoenix adds an extra column (named _0) with an empty value in HBase. This column (_0) is auto-generated by Phoenix, but I don't need it. Like this:
ROW COLUMN+CELL
abc  column=F:A, timestamp=1451305685300, value=123
abc  column=F:_0, timestamp=1451305685300, value=   # I want to avoid generating this column
Could you tell me how to avoid that? Thank you very much!
"At create time, to improve query performance, an empty key value is
added to the first column family of any existing rows or the default
column family if no column families are explicitly defined. Upserts will also add this empty key value. This improves query performance by having a key value column we can guarantee always being there and thus minimizing the amount of data that must be projected and subsequently returned back to the client."
Apache Phoenix Documentation
Regarding your question of whether that is avoidable:
You could work around the problem by adding the following statements at the end of your sql:
ALTER TABLE "<your-table>" ADD "<your-cf>"."_0" VARCHAR(1);
ALTER TABLE "<your-table>" DROP COLUMN "<your-cf>"."_0";
You should only do this if you query the table with Phoenix but also access it with another system that is not aware of this Phoenix-specific dummy value.