Delphi: Sort ClientDataSet by datetime asc, nulls last - sorting

I need to sort a ClientDataSet by a DateTime field, e.g. next_due_date, in ascending order and with null values last.
I will be adding new records at runtime and I am not allowed to execute the SQL query again.
Can you use an index on a ClientDataSet in such a way?

You could create an internal calculated field in the ClientDataSet, populate it as your needs dictate, and create an index referring to that field.
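A minimal sketch of that approach (the sort_due_date field, component names, and sentinel date are assumptions, not from the question): add a TDateTimeField of kind fkInternalCalc before opening the ClientDataSet (unlike ordinary calculated fields, internal calculated fields are stored in the dataset and can be indexed), fill it in OnCalcFields so that nulls sort last, then switch to an index on it:

procedure TFormMain.cdsItemsCalcFields(DataSet: TDataSet);
begin
  // sort_due_date is an fkInternalCalc TDateTimeField created before Open
  if DataSet.FieldByName('next_due_date').IsNull then
    // give null dates a far-future value so they sort last in ascending order
    DataSet.FieldByName('sort_due_date').AsDateTime := EncodeDate(9999, 12, 31)
  else
    DataSet.FieldByName('sort_due_date').AsDateTime :=
      DataSet.FieldByName('next_due_date').AsDateTime;
end;

procedure TFormMain.ApplyDueDateOrder;
begin
  // index the internal calculated field and make it the active order
  cdsItems.AddIndex('ixDueDate', 'sort_due_date', []);
  cdsItems.IndexName := 'ixDueDate';
end;

Records inserted at runtime go through OnCalcFields as well, so they fall into the right position without re-running the query.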

Related

Deduplication in Oracle

Situation:
Table 'A' receives data from an Oracle GoldenGate feed as New, Updated, or Duplicate (N/U/D) records, which either create a new record or overwrite the old one depending on that characteristic. Every entry in the table has an UpdatedTimeStamp column containing its insertion timestamp.
Scope:
To write a stored procedure in Oracle that pulls the data for a time period based on the UpdatedTimeStamp column and publishes an XML using DBMSXMLGEN.
How can I ensure that a duplicate entered into the table is not processed again?
FYI: I am currently filtering via a new table that I created, named 'A-stg', into which old data is inserted incrementally.
As far as I understood the question, there are a few ways to avoid duplicates.
The most obvious is to use DISTINCT, e.g.
select distinct data_column from your_table
Another one is to use the timestamp column and get only the last (or the first?) value, e.g.
select data_column, max(timestamp_column)
from your_table
group by data_column
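If you need the whole latest row per key rather than just the maximum timestamp, an analytic-function variant of the same idea (a sketch, reusing the placeholder names above) is:
-- keep only the most recent version of each data_column value
select data_column, timestamp_column
from (
  select data_column,
         timestamp_column,
         row_number() over (partition by data_column
                            order by timestamp_column desc) as rn
  from your_table
)
where rn = 1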

Get latest row using Laravel?

It appears I am not getting the latest row when several rows share the same created_at value.
Using $model->latest()->first(), I get the first row rather than the last row for that created_at.
How to solve this?
latest() will use the created_at column by default.
If all of your created_at values are the exact same, this obviously won't work...
You can pass a column name to latest() to tell it to sort by that column instead.
You can try:
$model->latest('id')->first();
This assumes you have an incrementing id column.
This will depend entirely on what other data you have in your table. A query against a relational database does not take the "physical position" into account - that is, there is no way to get the "last inserted row" of a table unless you can check some value in the table that indicates it is probably the last row.
One of the common ways to do this is to have an auto-incrementing unique key in the database (often the primary key), and you can simply get the largest value in that set. It's not guaranteed to be the last row inserted, but for most applications this is usually true.
What you need is the equivalent of this query to be executed:
SELECT * FROM Table WHERE created_at = ? ORDER BY ID DESC LIMIT 1
or, in Eloquent ORM
$model->where('created_at', '=', ?)->orderBy('id', 'desc')->take(1)->first();
Keep in mind that you'll probably need other filters, since it is entirely possible that other users or processes insert records at the same time, generating the same creation date, and you'd end up with somebody else's records.
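For example, scoping the lookup to the current user's rows first (the user_id column and the variables here are assumptions for illustration):
$model->where('user_id', $userId)
    ->where('created_at', '=', $createdAt)
    ->orderBy('id', 'desc')
    ->first();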

Create index for last two digits of number in Oracle

I have a massive table on which I can't do any more partitioning or sub-partitioning, nor am I allowed to do any ALTER. I want to query its records in batches, and thought a good way would be to use the last two digits of the account numbers (no other field would split the records as evenly).
I guess I'd need to at least index that somehow (remember I can't alter the table to add a virtual column either).
Is there any kind of index to be used in such situation?
I am using Oracle 11gR2
You can use a function-based index:
create index two_digits_idx on table_name (substr(account_number, -2));
This index will be used only in queries like this:
select ...
from table_name t ...
where substr(account_number, -2) = '25' -- or any other two digits
For the index to be used, the query must contain the same expression as the index.
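With that index in place, querying the table in batches just means running the same expression with each two-digit suffix in turn (a sketch; :batch is an assumed bind variable):
select t.*
from table_name t
where substr(t.account_number, -2) = :batch  -- bind :batch to '00' .. '99', one batch per run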

How do I store a Cassandra table solely in descending date order?

I have a table that stores millions of url, date and name entries. Each row is unique in terms of either:
url + date
or
date + name.
I require this table to be stored in descending date order so that when I query it I can simply "SELECT * FROM mytable LIMIT 1000" to get the most recent 1000 records, with no sorting involved. Does anyone know how to set this up? To the best of my current understanding I am trying the following, but it does not store the rows in date order:
CREATE TABLE mytable (
    url text,
    date timestamp,
    name text,
    PRIMARY KEY ((url, name), date)
)
WITH CLUSTERING ORDER BY (date DESC);
To store the data globally in a given order, you'd need to change the partitioner to the byte-ordered partitioner. This is no longer a good idea; it's maintained for backwards compatibility, but there are issues:
http://www.datastax.com/documentation/cassandra/2.1/cassandra/architecture/architecturePartitionerBOP_c.html
You could also apply bucketing and query over your buckets: each bucket is a partition, and within each partition the data is stored in order. Not exactly what you want, but worth trying.
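A sketch of that bucketing idea (the day-sized bucket and the derived column are assumptions, not from the question): make a coarse time bucket the partition key, cluster by date descending inside it, and read the newest bucket first:
-- bucket rows by day; within a day's partition rows are stored newest-first
CREATE TABLE mytable_by_day (
    day text,            -- e.g. '2015-06-01', derived from date at write time
    date timestamp,
    url text,
    name text,
    PRIMARY KEY ((day), date, url, name)
) WITH CLUSTERING ORDER BY (date DESC, url ASC, name ASC);

SELECT * FROM mytable_by_day WHERE day = '2015-06-01' LIMIT 1000;
If one day's bucket holds fewer than 1000 rows, the application follows up with the previous day's bucket.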

Computed column index

I have a table Com_Main which contains a column CompanyName nvarchar(250). The values have an average length of 19 and a maximum length of 250.
To improve performance I want to add a computed column left20_CompanyName which holds the first 20 characters of CompanyName:
alter table Com_main
add left20_CompanyName as LEFT(CompanyName, 20) PERSISTED
Then I create Index on this column:
create index ix_com_main_left20CompanyName
on Com_main (LEFT20_CompanyName)
So when I use
select CompanyName from Com_Main
where LEFT20_CompanyName LIKE '122%'
it uses this nonclustered index, but when the query is like:
select CompanyName from Com_Main
where CompanyName LIKE '122%'
it does a full table scan and doesn't use this index.
So the question:
Is it possible to make SQL Server use this index on the computed column in the last query?
No. MySQL supports partial indexing of varchar columns but MS SQL Server does not.
You might be able to speed up table scans through partitioning but I don't know how smart SQL Server is in this regard.
I don't think the SQL query engine would realize that the LEFT20_CompanyName column maps over so neatly to the CompanyName column - since a computed column could use virtually any formula, there's no way for it to know that the index on that other column is actually useful in this case.
Why not just create the index on the CompanyName column? So what if a few values in that field are longer than average? If you create it directly on the column and avoid the computed column altogether, I think it will use the index in both cases.
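For what it's worth, that suggestion is just the following (a sketch; SQL Server can seek on a plain index for a prefix pattern such as LIKE '122%', so both queries could use it):
create index ix_com_main_CompanyName
on Com_main (CompanyName)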
Maybe I'm missing something, but I'm not sure what you're trying to gain by doing the computed column on only the first 20 characters.
