Newbie with Elastic Stack
I have an Oracle database table where rows get inserted daily at 12 AM, and those column values are later updated whenever they change.
I tried doc_as_upsert, but the values aren't updating; instead, new rows are being inserted, creating duplicates of the data.
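For reference, the update request I am sending looks roughly like this (the index name, id, and field names here are placeholders):
POST my-index/_update/some-id
{
  "doc": { "field1": "new value" },
  "doc_as_upsert": true
}
As I understand it, this only updates in place when the request targets an existing _id; otherwise a new document is created.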
There is no unique id for that table.
Can I use the Elasticsearch _id as a reference for updating the data?
Can anyone suggest a solution for this problem?
Thanks in advance
Related
Hi, can anyone help me with this issue? I have a problem inserting a dynamic value, which comes from a previous database table, into another table in Katalon.
Please find my information below:-
This screenshot is of the ab.dbo.DOCUMENT table, whose DOCUMENT_ID is auto-populated, meaning a random number appears in it by itself.
Another screenshot is of the bc.dbo.DOCUMENT_IC table, where I need to manually key in the DOCUMENT_ID value based on what is given by the ab.dbo.DOCUMENT table's DOCUMENT_ID.
Attached is a screenshot of bc.dbo.DOCUMENT_IC.
In Katalon I am using a keyword to connect to my database, run the insert query, and close the connection. I am aware of this step and am able to connect to the database with Katalon. But I am not sure how to pass the dynamically generated DOCUMENT_ID value from the ab.dbo.DOCUMENT table to the DOCUMENT_ID of the bc.dbo.DOCUMENT_IC table, where I currently have to key in the value manually.
Below is my Katalon script:-
Hopefully someone can help me with this.
Thank you.
If I have an auto-incrementing ID in one table and I need that value elsewhere, I would typically write SQL like this:
insert into firsttable (Document_Type) values ('PDF');
insert into secondtable (Document_ID, App_ref_Num) values (@@IDENTITY, 'somenumber');
In the databases I have worked with, @@IDENTITY gives you the integer ID of the last inserted row. If you can't run multiple statements, most connection libraries have something like $conn->insert_id that does the same thing as running SELECT @@IDENTITY.
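Applied to the tables in this thread, a minimal sketch might look like the following (assuming SQL Server; the Document_Type and App_Ref_Num columns are placeholders, not names from the question):
INSERT INTO ab.dbo.DOCUMENT (Document_Type) VALUES ('PDF');
DECLARE @new_id INT;
SET @new_id = CAST(@@IDENTITY AS INT); -- id generated by the insert above
INSERT INTO bc.dbo.DOCUMENT_IC (DOCUMENT_ID, App_Ref_Num)
VALUES (@new_id, 'somenumber');
On SQL Server specifically, SCOPE_IDENTITY() is usually preferred over @@IDENTITY, because @@IDENTITY can pick up identity values generated by triggers on other tables.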
So, I am a beginner with Oracle and would really appreciate your help.
I have 3 tables: EMPLOEES, PERSONAL_DATA, and RECORDS. I want to create an UPDATE trigger that, when it fires, takes the OLD values from EMPLOEES, finds the personal data of the employee being updated in the PERSONAL_DATA table using the OLD id, and inserts all of that data (the OLD values from EMPLOEES plus the row fetched from PERSONAL_DATA) into the RECORDS table. I have been trying to use a SELECT statement to fetch information from the PERSONAL_DATA table, but the compiler throws an error.
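In PL/SQL, a SELECT inside a trigger body needs an INTO clause (or a cursor); a bare SELECT is a common cause of that compile error. A minimal sketch, assuming column names not given in the question (id and name on EMPLOEES, an employee_id key and birth_date on PERSONAL_DATA, matching columns on RECORDS):
CREATE OR REPLACE TRIGGER trg_emploees_records
BEFORE UPDATE ON EMPLOEES
FOR EACH ROW
DECLARE
  v_pd PERSONAL_DATA%ROWTYPE;
BEGIN
  -- fetch the personal data of the employee being updated, using the OLD id
  SELECT *
    INTO v_pd
    FROM PERSONAL_DATA
   WHERE employee_id = :OLD.id;

  -- archive the OLD employee values together with the fetched personal data
  INSERT INTO RECORDS (employee_id, emp_name, birth_date, changed_at)
  VALUES (:OLD.id, :OLD.name, v_pd.birth_date, SYSDATE);
END;
/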
When I execute the UPSERT command on Apache Phoenix, I always see that Phoenix adds an extra column (named _0) with an empty value in HBase. This column (_0) is auto-generated by Phoenix, but I don't need it. Like this:
ROW COLUMN+CELL
abc column=F:A, timestamp=1451305685300, value=123
abc column=F:_0, timestamp=1451305685300, value= # I want to avoid generating this row
Could you tell me how to avoid that? Thank you very much!
"At create time, to improve query performance, an empty key value is
added to the first column family of any existing rows or the default
column family if no column families are explicitly defined. Upserts will also add this empty key value. This improves query performance by having a key value column we can guarantee always being there and thus minimizing the amount of data that must be projected and subsequently returned back to the client."
Apache Phoenix Documentation
Regarding your question of whether that is avoidable:
You could work around the problem by adding the following statements at the end of your SQL:
ALTER TABLE "<your-table>" ADD "<your-cf>"."_0" VARCHAR(1);
ALTER TABLE "<your-table>" DROP COLUMN "<your-cf>"."_0";
You should only do this if you query the table with Phoenix but then access it with another system that is not aware of this Phoenix-specific dummy value.
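For example, with the column family F from the scan output above (the table name MY_TABLE is a placeholder):
ALTER TABLE "MY_TABLE" ADD "F"."_0" VARCHAR(1);
ALTER TABLE "MY_TABLE" DROP COLUMN "F"."_0";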
I'm new to both Oracle and Informatica.
I'm currently working on a small task where I need to select all records from the source table, filter the results to get only records where field1 = 'Y', and finally insert new rows into the target table containing only the src.field2 and src.field3 values.
These two fields are used for the primary key and for the index of the target table.
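In plain SQL, the equivalent of this mapping would be something like the following (table names are placeholders):
INSERT INTO target_table (field2, field3)
SELECT src.field2, src.field3
FROM source_table src
WHERE src.field1 = 'Y';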
So I get an error in Informatica:
"ORA-26002: Table has index defined upon it"
I'd rather not drop the index. Is there a workaround?
I've tried altering the index to UNUSABLE, but I got the same error.
Please advise.
Thanks.
Try using Normal load mode instead of Bulk; Bulk mode uses a direct-path load, which is what raises ORA-26002 on a table with an index, while Normal mode uses conventional inserts that maintain the index. You can set this in the session properties for the target.
I created an Elasticsearch index using Hive.
I have one temp table where I load all the raw data.
From that table I select some data on certain criteria and insert it into a table that is integrated with the Elasticsearch index.
After index creation I compared the counts in the Hive main table (on the same criteria), in the table integrated with ES, and in the Elasticsearch index itself.
I found the counts are not the same:
In ES index it is: 4663296
On table integrated with ES: 4663296 (same as ES)
but in Hive it is: 4611296 (main table, same criteria) - less than ES
So could someone please tell me why this count is higher in ES? It should be the same, am I right?
Thanks,
Rackto
It turned out there were some duplicate records in ES.
So what I did was set the document id manually (to a key in the data that is always unique), and now the counts are the same.
You just need to add one table property in the Hive table creation:
TBLPROPERTIES('......., 'es.mapping.id' = 'field_name_of_the_unique_id');
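For context, a minimal sketch of an ES-integrated Hive table with this property set (using the elasticsearch-hadoop storage handler; the index, host, and column names are placeholders):
CREATE EXTERNAL TABLE es_table (
  unique_key STRING,
  payload    STRING
)
STORED BY 'org.elasticsearch.hadoop.hive.EsStorageHandler'
TBLPROPERTIES (
  'es.resource'   = 'my_index/my_type',
  'es.nodes'      = 'es-host:9200',
  'es.mapping.id' = 'unique_key'
);
With es.mapping.id set, re-inserting a row with the same key overwrites the existing document instead of creating a duplicate, which is why the counts now match.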
Thanks