What is the effective way to fetch data from two fields in PropelORM?

I need to search for a name across two fields, first_name and last_name.
I'm doing the following:
->filterByFirstName($fname, Criteria::LIKE)
->_or()
->filterByLastName($lname, Criteria::LIKE)
If I filter by either first_name or last_name alone it works, but if I use both then it returns nothing.
I also tried _and(), and with both first_name and last_name it doesn't work either.
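For reference, the OR search described here corresponds to SQL along these lines (a sketch; the person table and the bound values are illustrative). One thing worth checking: Criteria::LIKE does not add wildcards for you, so $fname and $lname must already contain %, e.g. '%john%'. A bare value makes LIKE behave like equality, which commonly makes the combined filter return nothing.
-- The query the two chained filters should produce (illustrative names/values):
SELECT *
FROM person
WHERE first_name LIKE '%john%'
   OR last_name LIKE '%doe%';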

Related

Oracle - Unique constraint while allowing null values

I'm a bit new to PL-SQL coming from T-SQL.
I have a requirement that only one phone number is allowed per user ID, but the phone number column can be null as many times as required.
So the table is:
User ID   Phone Number
-------   ------------
1         NULL
1         9735152122
1         NULL
2         NULL
3         NULL
1         2124821212
It's that last one I need to block, although the first three are fine. In this case I'm talking about the sample table I've posted, not the actual table order. I just need to allow the NULLs through but block if there are duplicate phone numbers per a given User ID.
I've read about function-based indexes, but I'm not sure exactly how to apply them here.
CREATE UNIQUE INDEX my_index ON my_table (
  CASE WHEN phone_number IS NULL THEN NULL ELSE user_id END
);
With this logic, if phone_number is NULL the indexed expression is NULL, and Oracle excludes rows whose index key is entirely NULL from the index, so those rows never conflict. If phone_number is not NULL, the row is indexed under its user_id, so a second row with a non-NULL phone number for the same user_id violates the unique index and is rejected.
P.S. This is not "PL/SQL", it is Oracle SQL. PL/SQL is the procedural language used to write such things as triggers, functions, etc.
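As a quick sanity check of the index (a sketch, assuming my_table has the user_id and phone_number columns used above):
INSERT INTO my_table (user_id, phone_number) VALUES (1, NULL);          -- OK
INSERT INTO my_table (user_id, phone_number) VALUES (1, NULL);          -- OK: all-NULL index keys are not stored
INSERT INTO my_table (user_id, phone_number) VALUES (1, '9735152122');  -- OK: first number for user 1
INSERT INTO my_table (user_id, phone_number) VALUES (1, '2124821212');  -- fails with ORA-00001: unique constraint violated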

ORA-00904 for ORDER BY in Oracle

I cannot understand what goes wrong in this query:
select last_name, first_name a from employees
order by "a";
Output is:
ORA-00904: "a": invalid identifier
However, this query works and sorts the results by first_name in ascending order:
select last_name, first_name a from employees
order by "A";
Oracle identifiers aren't case-sensitive by default; unquoted names are folded to upper case.
So, when you say
select first_name a from employees
Oracle sees that as
SELECT FIRST_NAME A FROM EMPLOYEES
but when you start using "Quotes"...
order by "a"
Oracle sees that as
ORDER BY "a"
a <> A
This isn't a problem if you ensure your quoted objects are also always capitalized, which is why your "A" works but your "a" doesn't.
My advice - just completely remove the quotes on your object names.
SELECT LAST_NAME,
       FIRST_NAME A
FROM EMPLOYEES
ORDER BY A;
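For completeness: if you really wanted a lower-case alias, you would have to quote it in both places, which is exactly the kind of fragility the advice above avoids:
select last_name, first_name "a" from employees
order by "a";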

Talend Normalize Flat File into Relational Database Tables

We have a single flat source table, and we need to insert different fields from a given record into multiple tables. We are successfully using lastInsertID a single time, but we are struggling with how to reuse fields from the same source row in subsequent related tables.
For example, if we had a mailing address (goofy example coming up, but good for common discussion)
-----------Source----------
First Name, Middle Name, Last Name, Address 1, Address 2, City, State, Zip
-----------Targets-------------
People:  address_id, First Name, Last Name
Address: address_id, state_id, zip_id, Address 1, Address 2
States:  state_id, State Name
Zip:     zip_id, Zip Code
Furthermore, we cannot rule out needing to add the same source column to more than one target table.
What is the best practice for this kind of data normalization in Talend?
I would approach this iteratively, normalising part of the table with each step.
You should be able to normalise the person data away from the address, state and zip data in one step and then normalise the state away from the address and zip data and then finally normalise the zip data away from the rest of the address.
As an example, and following on from the example in your question, here are a few jobs that will do exactly that.
To start with we should create the example data. I'm going to use MySQL for this example, but the same principles apply to any of the major RDBMSs.
Let's create an empty table to start with:
DROP DATABASE IF EXISTS normalisation;
CREATE DATABASE IF NOT EXISTS normalisation
  CHARACTER SET utf8 COLLATE utf8_unicode_ci;
CREATE TABLE IF NOT EXISTS normalisation.denormalised (
  FirstName  VARCHAR(255),
  MiddleName VARCHAR(255),
  LastName   VARCHAR(255),
  Address1   VARCHAR(255),
  Address2   VARCHAR(255),
  City       VARCHAR(255),
  State      VARCHAR(255),
  Zip        VARCHAR(255)
) ENGINE = INNODB;
We need to populate this with some example data, which can be done easily enough with Talend's tRowGenerator component. I've configured the tRowGenerator to give us some semi-sensible testing output, and I've added an extra step that assigns co-habitants to roughly a third of the addresses using a tMap.
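If you would rather seed the denormalised table by hand than with a tRowGenerator, a few rows of test data are enough (a sketch; the names and addresses are made up):
INSERT INTO normalisation.denormalised
  (FirstName, MiddleName, LastName, Address1, Address2, City, State, Zip)
VALUES
  ('John',  'Q',  'Public', '1 Main St', NULL,    'Springfield', 'IL', '62701'),
  ('Jane',  NULL, 'Public', '1 Main St', NULL,    'Springfield', 'IL', '62701'),  -- co-habitant: same address
  ('Alice', 'B',  'Smith',  '2 Oak Ave', 'Apt 3', 'Columbus',    'OH', '43004');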
Now that we have our test data easily generated we can move on to actually normalising the data from this denormalised table.
As mentioned above, our first step is to normalise the person data out. We start by creating the necessary tables for the person data and the remaining address data:
CREATE TABLE IF NOT EXISTS normalisation.person (
  Person_id  BIGINT AUTO_INCREMENT PRIMARY KEY,
  FirstName  VARCHAR(255),
  MiddleName VARCHAR(255),
  LastName   VARCHAR(255),
  Address_id BIGINT
) ENGINE = INNODB;
CREATE TABLE IF NOT EXISTS normalisation.addressStateZip (
  Address_id BIGINT AUTO_INCREMENT PRIMARY KEY,
  Address1   VARCHAR(50),
  Address2   VARCHAR(50),
  City       VARCHAR(50),
  State      VARCHAR(50),
  Zip        VARCHAR(50),
  UNIQUE KEY addressStateZip (Address1, Address2, City, State, Zip)
) ENGINE = INNODB;
We then populate these two tables by taking all of the address-type data, keeping only the unique rows, and putting the result into the addressStateZip staging table. The second part of the job then joins the addressStateZip data back to the initial denormalised table to collect the Address_id for each row of the person table.
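In plain SQL those two parts would look roughly like this (a sketch of what the Talend job does; MySQL's NULL-safe <=> operator stands in for the join on nullable columns such as Address2):
-- Part 1: unique address-type rows into the staging table
INSERT INTO normalisation.addressStateZip (Address1, Address2, City, State, Zip)
SELECT DISTINCT Address1, Address2, City, State, Zip
FROM normalisation.denormalised;
-- Part 2: people, joined back to the staging table to pick up Address_id
INSERT INTO normalisation.person (FirstName, MiddleName, LastName, Address_id)
SELECT d.FirstName, d.MiddleName, d.LastName, a.Address_id
FROM normalisation.denormalised d
JOIN normalisation.addressStateZip a
  ON a.Address1 <=> d.Address1
 AND a.Address2 <=> d.Address2
 AND a.City = d.City
 AND a.State = d.State
 AND a.Zip = d.Zip;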
The remaining steps are now quite similar.
Next we create the state table and another staging table for the address and zip data:
CREATE TABLE IF NOT EXISTS normalisation.state (
  State_id BIGINT AUTO_INCREMENT PRIMARY KEY,
  State    VARCHAR(255),
  UNIQUE KEY state (State)
) ENGINE = INNODB;
CREATE TABLE IF NOT EXISTS normalisation.addressZip (
  Address_id BIGINT AUTO_INCREMENT PRIMARY KEY,
  Address1   VARCHAR(50),
  Address2   VARCHAR(50),
  City       VARCHAR(50),
  State_id   BIGINT,
  Zip        VARCHAR(50),
  UNIQUE KEY addressStateZip (Address1, Address2, City, State_id, Zip)
) ENGINE = INNODB;
Now we need to take the unique states from the addressStateZip table and put them into the state table. The second part, as before, then writes the data into the addressZip staging table with the State_id in place of the actual state.
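Expressed as SQL, those two parts are roughly (a sketch):
-- Unique states into the state table
INSERT INTO normalisation.state (State)
SELECT DISTINCT State
FROM normalisation.addressStateZip;
-- Rewrite the staging data with State_id in place of the state name
INSERT INTO normalisation.addressZip (Address1, Address2, City, State_id, Zip)
SELECT a.Address1, a.Address2, a.City, s.State_id, a.Zip
FROM normalisation.addressStateZip a
JOIN normalisation.state s ON s.State = a.State;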
Now, finally, we can create our zip table and then link that to a proper address table:
CREATE TABLE IF NOT EXISTS normalisation.zip (
  Zip_id BIGINT AUTO_INCREMENT PRIMARY KEY,
  ZIP    VARCHAR(255),
  UNIQUE KEY zip (ZIP)
) ENGINE = INNODB;
CREATE TABLE IF NOT EXISTS normalisation.address (
  Address_id BIGINT AUTO_INCREMENT PRIMARY KEY,
  Address1   VARCHAR(50),
  Address2   VARCHAR(50),
  City       VARCHAR(50),
  State_id   BIGINT,
  Zip_id     BIGINT,
  UNIQUE KEY addressStateZip (Address1, Address2, City, State_id, Zip_id)
) ENGINE = INNODB;
Using the same methodology as with the state data, we get all of the unique zips and put them into the zip table. And, as before, we can then put the Zip_id into a new, finished address table.
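Again as SQL (a sketch):
-- Unique zips into the zip table
INSERT INTO normalisation.zip (ZIP)
SELECT DISTINCT Zip
FROM normalisation.addressZip;
-- The finished address table, with Zip_id in place of the zip code
INSERT INTO normalisation.address (Address1, Address2, City, State_id, Zip_id)
SELECT a.Address1, a.Address2, a.City, a.State_id, z.Zip_id
FROM normalisation.addressZip a
JOIN normalisation.zip z ON z.ZIP = a.Zip;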
And to check things we can now run the following query to get all the data back out:
SELECT p.FirstName, p.MiddleName, p.LastName, a.Address1, a.Address2, a.City, s.State, z.Zip
FROM normalisation.person AS p
INNER JOIN normalisation.address AS a ON a.Address_id = p.Address_id
INNER JOIN normalisation.state AS s ON s.State_id = a.State_id
INNER JOIN normalisation.zip AS z ON z.Zip_id = a.Zip_id;
You'll probably also want to add some foreign key constraints to the tables now that you're done setting things up:
ALTER TABLE normalisation.person
  ADD FOREIGN KEY (Address_id) REFERENCES normalisation.address (Address_id);
ALTER TABLE normalisation.address
  ADD FOREIGN KEY (State_id) REFERENCES normalisation.state (State_id),
  ADD FOREIGN KEY (Zip_id) REFERENCES normalisation.zip (Zip_id);
I highly doubt this is the best practice, but it is the best approach I know of. I have two different approaches for this kind of task.
Assuming [First Name, Last Name] is unique and that people may share addresses, I would:
Insert Zips and States, checking whether they already exist; if so, skip the insert (see the sketch below).
Insert Addresses with a lookup on States and Zips to get state_id and zip_id.
Insert People with a lookup first on Zips and States again, then a lookup on Addresses to get address_id, and finally insert into People if the person does not already exist.
If [First Name, Last Name] is not unique, or for some reason I don't want people to share addresses, zips or states, I usually force the source to have some kind of ID, either an explicit one or an implicit one like LINE_NUMBER, so we can distinguish people. The insertion order would then be the same, but in this case I use people_id to distinguish addresses, zips and states, and even people, in some of the lookups, depending on the intended result.
This last approach is kind of dirty, since we might end up with useless IDs that were only needed for the insertion. To avoid that I would use similar temporary tables with the extra fields, and at the end insert into the final tables without them. If this is not a one-time load, it would require some extra logic to keep the temporary and final tables in sync.
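A minimal SQL sketch of that "insert only if it does not already exist" lookup for the first step, reusing the table names from the answer above:
INSERT INTO normalisation.state (State)
SELECT DISTINCT d.State
FROM normalisation.denormalised d
WHERE NOT EXISTS (
  SELECT 1
  FROM normalisation.state s
  WHERE s.State = d.State
);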

"Create table as select" does not preserve not null

I am trying to use the "Create Table As Select" feature from Oracle to do a fast update. The problem I am seeing is that the NOT NULL constraint is not being preserved.
I defined the following table:
create table mytable (
  accountname varchar2(40) not null,
  username    varchar2(40)
);
When I do a raw CTAS, the NOT NULL on accountname is preserved:
create table ctamytable as select * from mytable;
describe ctamytable;
Name        Null     Type
----------- -------- ------------
ACCOUNTNAME NOT NULL VARCHAR2(40)
USERNAME             VARCHAR2(40)
However, when I do a replace on accountname, the NOT NULL is not preserved.
create table ctamytable as
select replace(accountname, 'foo', 'foo2') accountname,
username
from mytable;
describe ctamytable;
Name        Null Type
----------- ---- -------------
ACCOUNTNAME      VARCHAR2(160)
USERNAME         VARCHAR2(40)
Notice that accountname no longer has the NOT NULL constraint, and its VARCHAR2 length went from 40 to 160 characters. Has anyone seen this before?
This is because you are no longer selecting ACCOUNTNAME, which has a column definition and metadata. Rather, you are selecting a string, the result of the REPLACE function, which doesn't carry any of that metadata; it is a different data type entirely.
A (potentially) better way is to create the table using a query with the original columns, but with a WHERE clause that guarantees zero rows. Then you can insert into the table normally with your actual SELECT. By selecting zero rows you still get the column metadata, so the table is created but nothing is inserted. Make sure your WHERE clause is something fast, like WHERE primary_key = -999999, some value you know will never exist.
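A sketch of that approach (using 1 = 0 as the guaranteed-empty predicate; the primary_key = -999999 trick works the same way):
-- Step 1: an empty copy that keeps the column metadata, including NOT NULL
create table ctamytable as
select accountname, username
from mytable
where 1 = 0;
-- Step 2: the real load (note: a replaced value longer than 40 characters
-- would now be rejected with ORA-12899, since the column stays VARCHAR2(40))
insert into ctamytable
select replace(accountname, 'foo', 'foo2'), username
from mytable;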
Another option here is to define the columns when you call the CREATE TABLE AS SELECT. It is possible to list the column names and include constraints while excluding the data types.
An example is shown below:
create table ctamytable (
  accountname not null,
  username
)
as
select replace(accountname, 'foo', 'foo2') accountname,
       username
from mytable;
Be aware that although this syntax is valid, you cannot include the data type. Also, explicitly declaring all the columns somewhat defeats the purpose of using CREATE TABLE AS SELECT.

Get primary key column of views

Is there a way to retrieve the list of views along with the primary key column name, when a view is built on the primary key column of its underlying table?
E.g.:
Employee(ID PRIMARY KEY, FIRST NAME, LAST NAME, SALARY, DEPARTMENT)
The view derived from the Employee table:
EMPLOYEEVIEW(ID, FIRST NAME, LAST NAME)
EMPLOYEEVIEW satisfies my constraint; I need to find these kinds of views.
The desired result is something like EMPLOYEEVIEW ID.
To fetch the primary key constraints of the tables in the current schema, you can use this query:
select *
from user_constraints
where constraint_type = 'P'
So, to search your view for primary keys, I'd use a query like this:
select *
from user_views v
join user_constraints c on upper(v.text) like '%'||c.table_name||'%'
where c.constraint_type = 'P'
and v.view_name = 'YOUR_VIEW_NAME'
Unfortunately the text column of the user_views view has the horrible datatype LONG, so you will need to create your own function (or find one online) to convert the LONG to a VARCHAR2, so that you can use upper() and LIKE on it.
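One common workaround is a small PL/SQL helper that reads the LONG into a VARCHAR2 (a sketch: it truncates the view text at 4000 characters, and the user_cons_columns join that supplies the actual column name is an addition to the query above):
create or replace function view_text(p_view_name in varchar2)
  return varchar2
is
  l_text long;  -- a LONG variable in PL/SQL behaves like a large VARCHAR2
begin
  select text
  into   l_text
  from   user_views
  where  view_name = p_view_name;
  return substr(l_text, 1, 4000);
end;
/
select v.view_name, cc.table_name, cc.column_name
from   user_views v
join   user_constraints c
       on upper(view_text(v.view_name)) like '%' || c.table_name || '%'
join   user_cons_columns cc
       on cc.constraint_name = c.constraint_name
where  c.constraint_type = 'P';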
