I have a database table in Oracle 11g created and populated with the following code:
CREATE TABLE TEST_TABLE (CODE NUMBER(1,0));
INSERT INTO TEST_TABLE (CODE) VALUES (3);
Now, I want to use this table as a lookup table in a Pentaho Kettle transformation. I want to make sure that the value of a column comes from this table, and abort if it does not. I have the following setup:
The data frame has a single column called Test of type integer, and a single row. The lookup step is configured as shown in a screenshot (omitted here).
However, the lookup always fails and the transformation is aborted, no matter whether the value of Test is 3 (which should be OK) or 4 (which should be aborted). Yet if I check the "Load all data from table" box, it works as expected.
So my question is this: Why does it not work unless I cache the whole table?
Two further observations:
When it works, and the row is printed in the log, I notice that Test is printed as [ 3] while From DB is printed as [3] (without the extra space). I don't know if this is of any significance, though.
If I change the database table so that CODE is created as INT, it works. This leads me to believe it is somehow related to the number formatting. (In my actual application, I cannot change the database tables.) I guess I should change the format of Test, but to what? Setting it to Number does not help, nor does Number with length 1 and precision 0.
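For comparison, the variant that works is below. In Oracle, the ANSI INT type is just an alias for NUMBER(38,0), so the difference between NUMBER(1,0) and INT comes down to precision and scale and how the driver reports the type; this is only a guess at why Kettle treats them differently.
CREATE TABLE TEST_TABLE (CODE INT); -- Oracle maps ANSI INT to NUMBER(38,0)
INSERT INTO TEST_TABLE (CODE) VALUES (3);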
Related
I am trying to implement my SQL Developer DB in Oracle APEX. I cannot figure out how to get the PKs in my table to auto-increment starting from a certain value (i.e. 400001). I have tried making triggers and sequences, but when I try to add a row using a form in APEX, my PK increments from 40 for some reason.
Here is my APEX form outcome (screenshot omitted).
Here is how it inserts into SQL Developer (screenshot omitted).
Basically, can someone describe to me how I can edit the existing trigger, or create a sequence, that would make the application_id of a new entry auto-increment by 1?
Thanks!
Find max application_id:
select max(application_id) from your_table;
Suppose it is 400010 (as the screenshot suggests). Now recreate the sequence (presuming its name is seq_app):
drop sequence seq_app;
create sequence seq_app start with 400011 increment by 1 nocache;
Trigger is most probably OK, as you see values being inserted into the table.
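If the trigger does need to be recreated, a minimal sketch could look like the following; the trigger and table names are assumptions, only application_id and seq_app come from the question.
create or replace trigger trg_app_id
  before insert on your_table
  for each row
begin
  -- assign the next sequence value to the primary key column
  select seq_app.nextval into :new.application_id from dual;
end;
/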
Side note: sequences will be unique, but not necessarily gapless. CACHE (or NOCACHE) might affect that, but for performance's sake you'd rather let Oracle cache sequence numbers (the default is 20), which means that if you don't use some of those cached numbers, they will be lost. I wouldn't worry, if I were you.
I want to create a script for a table that should include:
Create Table statement
Data in the table
Sequence used by the table (only the sequence code)
And the trigger associated with it
I have added a sequence and a trigger for the auto-increment ID. I searched, but I couldn't find enough answers about scripting the sequence used in the trigger.
I understand you, partially.
In order to get the CREATE TABLE statement, choose that table and, on the right-hand side of the screen, navigate to the "Script" tab - there it is. Apart from CREATE TABLE, it contains some more statements (such as ALTER TABLE to add constraints, CREATE INDEX, and your number 4 - CREATE TRIGGER).
As for the sequence: it is a separate object, which is not related to any particular table. One sequence can be used to provide unique numbers for many tables, so I'm not sure what it is that you are looking for.
In order to get data from that table, right-click the table name; in the menu, choose "Export data" >> "Insert statements". That'll create a bunch of INSERT INTO commands. That's OK if the table is small; for large ones, you'll grow old before it finishes.
The last sentence leads to another suggestion: why would you want to do it that way? A proper option is to export that table, using either Data Pump or the Original EXP utility.
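For example, a single-table Data Pump export might look like this; the credentials, directory and file names are placeholders, not taken from the question:
expdp your_user/your_password@your_db tables=YOUR_TABLE directory=DATA_PUMP_DIR dumpfile=your_table.dmp logfile=your_table_exp.log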
[EDIT]
After you insert the data "as is" (i.e. with no changes in the ID column values), disable the trigger and run an additional update. If we suppose that the sequence name is MY_SEQ (create it the way you want it, specifying its start value etc.), it would be as simple as
update your_table set id = my_seq.nextval;
Once it is done, enable the trigger so that it fires for newly added rows.
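Put together, the whole step might look like this; the trigger name trg_your_table is an assumption, while your_table and my_seq come from the example above.
-- stop the trigger from overwriting IDs during the bulk update
alter trigger trg_your_table disable;
-- renumber all existing rows from the sequence
update your_table set id = my_seq.nextval;
commit;
-- re-enable the trigger so it fires for newly added rows
alter trigger trg_your_table enable;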
When I execute the UPSERT command on Apache Phoenix, I always see that Phoenix adds an extra column (named _0) with an empty value in HBase. This column (_0) is auto-generated by Phoenix, but I don't need it:
ROW  COLUMN+CELL
abc  column=F:A, timestamp=1451305685300, value=123
abc  column=F:_0, timestamp=1451305685300, value=   # I want to avoid generating this row
Could you tell me how to avoid that? Thank you very much!
"At create time, to improve query performance, an empty key value is
added to the first column family of any existing rows or the default
column family if no column families are explicitly defined. Upserts will also add this empty key value. This improves query performance by having a key value column we can guarantee always being there and thus minimizing the amount of data that must be projected and subsequently returned back to the client."
Apache Phoenix Documentation
Regarding your question whether that is avoidable:
You could work around the problem by adding the following statements at the end of your SQL:
ALTER TABLE "<your-table>" ADD "<your-cf>"."_0" VARCHAR(1);
ALTER TABLE "<your-table>" DROP COLUMN "<your-cf>"."_0";
You should only do this if you query the table with Phoenix but then access it with another system that is not aware of this Phoenix-specific dummy value.
I know this question has been asked more than once here, but I am not able to resolve my issue, so I am posting it again for help.
I have a table called Transaction in an Oracle database (11g) with 2.7 million records. There is a not-null varchar2(20) column (txn_id) which contains numeric values. This is not the primary key of the table, and most of the values are unique. By "most of the values" I mean there are cases where one value can appear 3-4 times in the table.
If I perform a simple select query based on TXN_ID, it takes about 5 seconds or more to return the result.
Select * from Transaction t where t.txn_id = 245643
I have an index created on this column, but when I check the explain plan for the above query, it is using a full table scan. This query is used many times in the application, which is making the application slow.
Can you please provide some help what might be causing this issue?
You are comparing a varchar column with a numeric literal (245643). This forces Oracle to convert one side of the equality, and by its conversion rules it converts the character column to a number, which prevents the index on txn_id from being used. Instead of having to guess how Oracle will handle this conversion, use a character literal:
SELECT * FROM Transaction t WHERE t.txn_id = '245643'
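To see why, note that with a numeric literal the predicate is effectively evaluated as the rewrite below, which cannot use a plain index on txn_id; this rewrite is for illustration, so check EXPLAIN PLAN on your own system to confirm:
SELECT * FROM Transaction t WHERE TO_NUMBER(t.txn_id) = 245643;
If the query cannot be changed, a function-based index on TO_NUMBER(txn_id) would be an alternative, but fixing the literal is the simpler fix.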
I have some large tables (millions of rows). I constantly receive files containing new rows to add into those tables - up to 50 million rows per day. Around 0.1% of the rows I receive are duplicates of rows I have already loaded (or are duplicates within the files). I would like to prevent those rows from being loaded into the table.
I currently use SQL*Loader in order to have sufficient performance to cope with my large data volume. If I take the obvious step and add a unique index on the columns which govern whether or not a row is a duplicate, SQL*Loader will start to fail the entire file which contains the duplicate row, whereas I only want to prevent the duplicate row itself from being loaded.
I know that in SQL Server and Sybase I can create a unique index with the 'Ignore Duplicates' property and that if I then use BCP the duplicate rows (as defined by that index) will simply not be loaded.
Is there some way to achieve the same effect in Oracle?
I do not want to remove the duplicate rows once they have been loaded - it's important to me that they should never be loaded in the first place.
What do you mean by "duplicate"? If you have a column which defines a unique row, you should set up a unique constraint against that column. One typically creates a unique index on this column, which will automatically set up the constraint.
EDIT:
Yes, as commented below, you should set up a "bad" file for SQL*Loader to capture invalid rows. But I think that establishing the unique index is probably a good idea from a data-integrity standpoint.
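A minimal sketch, assuming the duplicate-defining columns are col_a and col_b on a table called big_table (all names here are hypothetical):
-- with a conventional-path load and a high enough ERRORS setting,
-- rows violating this index are rejected into the bad file
-- instead of aborting the whole load
CREATE UNIQUE INDEX uq_big_table_dup ON big_table (col_a, col_b);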
Use the Oracle MERGE statement. Some explanations here.
You didn't mention which release of Oracle you have. Have a look there for the MERGE command.
Basically, like this:
-- merge all rows from the staging record set temp_emp_rec
MERGE INTO hr.employees e
USING temp_emp_rec t
ON (e.emp_id = t.emp_id)
WHEN MATCHED THEN
  -- update the existing row
  UPDATE
  SET first_name = t.first_name,
      last_name  = t.last_name
WHEN NOT MATCHED THEN
  -- insert a new row into the table
  INSERT (emp_id, first_name, last_name)
  VALUES (t.emp_id, t.first_name, t.last_name);
I would use integrity constraints defined on the appropriate table columns.
This page from the Oracle Concepts manual gives an overview; if you scroll down, you will also see what types of constraints are available.
Use the options below; DIRECT=FALSE forces a conventional-path load, and with ERRORS=9999999 sqlldr will only terminate after that many rejected rows:
OPTIONS (ERRORS=9999999, DIRECT=FALSE )
LOAD DATA
You will get the duplicate records in the bad file:
sqlldr user/password@schema CONTROL=file.ctl LOG=file.log BAD=file.bad
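For completeness, a minimal control file (file.ctl) along these lines might look as follows; the input file, table and column names are assumptions:
OPTIONS (ERRORS=9999999, DIRECT=FALSE)
LOAD DATA
INFILE 'new_rows.dat'
APPEND
INTO TABLE big_table
FIELDS TERMINATED BY ','
(col_a, col_b, col_c)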